title
stringlengths
15
163
paper_decision
stringclasses
4 values
review_1
stringlengths
853
32.6k
rebuttals_1
stringlengths
0
15.1k
review_2
stringlengths
1.03k
35.6k
rebuttals_2
stringlengths
0
15.1k
review_3
stringlengths
807
27.4k
rebuttals_3
stringlengths
0
15k
review_4
stringlengths
780
22.2k
rebuttals_4
stringlengths
0
15.1k
review_5
stringclasses
171 values
rebuttals_5
stringclasses
166 values
review_6
stringclasses
25 values
rebuttals_6
stringclasses
24 values
review_7
stringclasses
4 values
rebuttals_7
stringclasses
4 values
LSCD: Lomb--Scargle Conditioned Diffusion for Time series Imputation
Accept (poster)
Summary: The paper proposes LSCD for irregular time series imputation, which uses Lomb-Scargle periodograms to compute additional frequency domain loss in the diffusion model. LSCD uses Lomb-Scargle instead of FFT to transform irregular time series into the frequency domain and achieves more accurate imputation results. Claims And Evidence: From line 21 to line 23, the abstract claims that the method imputes without requiring imputation "in the frequency domain", which seems to conflict with the statement "prior to frequency estimation" from line 17 to line 18. Also, imputing in the time domain instead of the frequency domain is more prevalent, so I suspect the claim is wrong. Methods And Evaluation Criteria: In Figure 2, there are two questionable designs. First, in the calculation of consistency loss, the ground-truth spectrum is calculated using masked "observed condition" instead of the original "observed time series". Second, "imputation target" and "noisy imputation target" are plotted mostly the same, which is confusing. Since the code is unavailable in the provided anonymous github link, and LSCD is based on CSDI [1], I assume LSCD uses a similar approach in handling the diffusion and denoising process. According to the implementation of CSDI, "noisy imputation target" should be distorted with random noise instead of shifted compared to the "imputation target". [1] Y. Tashiro, J. Song, Y. Song, and S. Ermon, "CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation," in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2021, pp. 24804–24816. Theoretical Claims: I check the correctness of conditional diffusion in section 3.3. Experimental Designs Or Analyses: Experimental designs are not convincing. First, Lomb-Scargle is an approach relying heavily on the assumption of sinusoidal signal, which makes it unfair to compare the baselines on synthetic sines datasets. Also, unlike fully observed regular datasets, which usually exhibit clear seasonal patterns, real-world irregular time series usually lack such patterns, making them significantly different from synthetic sines datasets. Moreover, I think frequency domain patterns are not directly associated with data distributions shown in Figure 5, where data distributions are just statistical results on observed values. Therefore, visualizing data distributions cannot prove the effectiveness of Lomb-Scargle on real-world irregular time series datasets. Lastly, since Lomb-Scargle is more computationally expensive than FFT, an analysis of training time and inference time is necessary to evaluate the model. Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The main contribution is the usage of frequency domain information in the diffusion model. However, compared to previous work CSDI [1], the introduction of the diffusion model in Figures and the experimental comparisons are not convincing enough. [1] Y. Tashiro, J. Song, Y. Song, and S. Ermon, "CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation," in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2021, pp. 24804–24816. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Other Weaknesses: • The code is not available in the provided URL at line 34. • Math symbols in Figure 2 are in low resolution. • I believe irregular time series are not equal to time series with missing values. The paper mainly discusses the missingness scenarios in Figure 3. However, unalignments and irregular time intervals in real-world irregular time series are caused by different sampling rates of different variables, which is not the same as regularly sampled time series with missing values. Therefore, Using incomplete synthetic data to simulate irregular time series is unreasonable. Although CSDI [1] also uses irregular time series datasets to prove its effectiveness, its task is "probabilistic time series imputation", which is a more general task that includes irregular time series imputation. [1] Y. Tashiro, J. Song, Y. Song, and S. Ermon, "CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation," in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2021, pp. 24804–24816. Other Comments Or Suggestions: N/A Questions For Authors: What's the research purpose of irregular time series imputation, especially on medical datasets? For example, in Physionet'12, samples are collected during patients' 48 hours of stay in ICU, where the values of different features are sampled at different frequencies as scheduled. The missingness is informative to indicate different sampling rates. Therefore, imputation for these medical irregular times series seems to lack proper motivations. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their careful assessment of our paper. Below, we provide detailed responses aimed at clarifying and addressing each concern.   ### Claims and Evidence Thank you for pointing out the discrepancy: in line 22, "frequency domain" should be "time domain". We have updated the text accordingly.   ### Methods And Evaluation Criteria (Figure 2) **Consistency loss:** We can confirm that the diagram correctly reflects the architecture, and that using $x_{co}$ instead of the "observed time series" is done by design: during training, we sample a conditional mask $m_{co}$ from the observed values and treat them as $x_{co}$. The consistency loss then ensures the reconstruction matches the frequency profile of these masked ‘observed’ values, which simulates the inference setting. This follows common practice in score-based diffusion training, including CSDI. **Noisy imputation target:** We have updated the figure so the effect of adding noise is more evident and does not resemble a shift. **Math symbols:** We have now increased the resolution of math symbols in the figure.   ### Experimental Design or Analyses **Use of Sine Datasets:** We acknowledge that the Lomb-Scargle method is well-suited to signals with sinusoidal components. The synthetic sine datasets were not intended to favor our method but rather to offer a controlled and interpretable setting where periodicity is known, in order to evaluate the recovery of ground truth frequency information. To mitigate potential bias, we also include evaluations on real-world irregular time series in Table 2. These results confirm that Lomb–Scargle conditioning remains helpful for imputation, even if the spectrum does not exhibit strong periodicity. **Figure 5:** We appreciate this observation and agree that the marginal distribution plots in Figure 5 do not, on their own, validate the model’s frequency reconstruction. The main purpose of these plots is to show that our imputed values align well with the observed data in the time-domain distributional sense (e.g. capturing the overall range and shape). For spectral validation, we rely on additional spectrum-based metrics (S-MAE) and Lomb–Scargle comparisons (Figure 4). We will clarify in the revision that Figure 5 primarily illustrates distributional consistency, while the Lomb–Scargle-based evaluations confirm how well our model preserves important frequency characteristics. **Computational Cost:** Please refer to the response to Reviewer 3 (**ZpA3**) for a detailed analysis of computational efficiency of our method.   ### Code Availability We sincerely apologize for the delayed submission of the source code. The upload was unfortunately postponed due to an internal review process, but it is now publicly available in the anonymous repository linked in the abstract.   ### Weaknesses We would like to highlight that Table 2 of our paper reports experiments on two real-world datasets (PhysioNet and PM2.5), demonstrating that our approach is not limited to synthetic scenarios. In these real-data settings, our method achieves consistent improvements over baseline imputation models, underscoring its practical applicability beyond the controlled environment of Table 1. Furthermore, like CSDI, our framework remains a probabilistic time series imputation method. Although we incorporate Lomb–Scargle conditioning, the underlying diffusion-based approach is unchanged. It still produces a distribution over missing values rather than a single deterministic estimate.   ### Question for Authors (Motivation) Even though missingness can reflect the inherent sampling schedule, imputation remains a standard practice in the medical time series literature for two main reasons: - Many predictive tasks, such as patient mortality prediction, rely on uniform or complete data inputs. Empirical results across multiple papers show that having an imputed time grid often improves classification or regression performance on these tasks, with respect to grids with missing data. - A large number of existing methods and neural architectures assume regularly spaced inputs. Imputation is thus commonly used to align varied medical measurements (e.g. vitals, labs) to a single grid, making it easier to integrate or compare multiple signals and to apply standard machine learning pipelines that are not natively designed for irregular sampling. We will integrate this content into the manuscript as motivation for the task of irregular time series imputation. --- Rebuttal Comment 1.1: Comment: ## Experimental Design or Analyses **Use of Sine Datasets**: I acknowledge that sine datasets are designed to provide an intuitive understanding on model’s performance. However, since the paper is titled “Irregular Time Series Imputation”, it should focus on the analysis on real-world irregular time series datasets (i.e., Table 2, 3). Table 1 is not strongly correlated to irregular time series imputation task, while taking too much spaces. ## Motivation **Reason 1**: The authors said “rely on uniform or complete data inputs”, “Empirical results across multiple papers” and “often improves classification or regression performance”, but did not provide any supporting materials to prove these statements. In my option, these are weak statements for the following reasons: 1. Widely researched medical time series datasets are irregular time series datasets: (1) PhysioNet’2012 [1]; (2) PhysioNet’2019 [2]; (3) MIMIC III [3]; (4) MIMIC IV [4]. Some related works on these medical irregular time series datasets: [5-7] 2. Error accumulation can make the prediction worse, where errors in imputed values affect subsequent predictions [8]. Therefore, imputation is not a necessary prerequisite for accurate prediction. [1] Silva, Ikaro, et al. “Predicting In-Hospital Mortality of ICU Patients: The PhysioNet/Computing in Cardiology Challenge 2012.” Computing in Cardiology, vol. 39, 2012, pp. 245–48. [2] Reyna, Matthew A., et al. “Early Prediction of Sepsis From Clinical Data: The PhysioNet/Computing in Cardiology Challenge 2019.” Critical Care Medicine, vol. 48, no. 2, Feb. 2020, p. 210. [3] Johnson, Alistair E. W., et al. “MIMIC-III, a Freely Accessible Critical Care Database.” Scientific Data, vol. 3, no. 1, 1, May 2016, p. 160035. [4] Johnson, Alistair E. W., et al. “MIMIC-IV, a Freely Accessible Electronic Health Record Dataset.” Scientific Data, vol. 10, no. 1, 1, Jan. 2023, p. 1. [5] Luo, Yicheng, et al. Knowledge-Empowered Dynamic Graph Network for Irregularly Sampled Medical Time Series. NeurIPS 2024. [6] Wu, Zhenbang, et al. An Iterative Self-Learning Framework for Medical Domain Generalization. NeurIPS 2023. [7] Jarrett, Daniel, et al. Clairvoyance: A Pipeline Toolkit for Medical Time Series. ICLR 2021 [8] Wu, Sifan, et al. Adversarial Sparse Transformer for Time Series Forecasting. NeurIPS 2020 --- In my opinion, this paper should be titled as “Probabilistic Time Series Imputation” or “ Incomplete Time Series Imputation” instead of “Irregular Time Series Imputation”, since the improvement (Lomb-Scargle) has no strong correlation with the unique properties of irregular time series (i.e., irregular time interval within each variable, and unaligned observations across different variables). The paper should be revised to be aligned with the task in title. --- Reply to Comment 1.1.1: Comment: Thank you for the thoughtful evaluation of our work. We would like to clarify that Lomb-Scargle does indeed support irregularly sampled time series natively. This motivated our initial decision to use the term "irregular" in the title. However, we recognize that our primary experimental focus has been on scenarios involving partially observed time series. Accordingly, we have removed "irregular" from the title and clarified across the text that our approach primarily targets missing values. We hope this revision better reflects the scope of our work and the role of Lomb-Scargle in preserving spectral information on data with missing values. **SINES DATASET** We previously mentioned the advantage of the sine dataset for providing ground truth information on the signal's spectrum. An additional advantage of this dataset that we would like to emphasize, is that it allows us to systematically explore the performance of the imputation methods across different types of missingness (POINT, SEQ, BLOCK). **MOTIVATION** We apologize for not providing specific references in our previous reply. Below, we offer clarifications and cite recent works that motivate why imputation can still be beneficial in downstream tasks. Time series imputation allows for a broader variety of models to be applied to prediction tasks on clinical data. We agree that not every scenario strictly requires imputation, some state-of-the-art models can process irregular data directly. However, imputation remains a practical necessity in many real-world pipelines because it allows to leverage methods that rely on uniformly spaced or complete inputs, rather than restricting them to specialized architectures [1]. Moreover, there is empirical evidence that high-quality imputation can boost downstream performance in tasks like mortality prediction [2,3,4,5,6]. However, imputation quality does not always correlate with gains in downstream tasks, which has motivated further research on building imputation methods that not only reconstruct missing values but also improve final predictive metrics [7,8]. These observations illustrate that while imputation may not be always required, it remains an important and active research topic, in particular to extend the pool of applicable models in clinical time series analysis. [1] Shadbahr, T., et al. "The impact of imputation quality on machine learning classifiers for datasets with missing values". Communications Medicine 3, 139 (2023). [2] Want, J., et al. "Deep Learning for Multivariate Time Series Imputation: A Survey". Arxiv, 2025. [3] Du et al. "SAITS: Self-Attention-based Imputation for Time Series". Expert Systems with Applications, 2023. [4] Du, W., et al. "TSI-Bench: Benchmarking Time Series Imputation". Arxiv, 2024. [5] J. Yoon, et al. "Estimating Missing Data in Temporal Data Streams Using Multi-Directional Recurrent Neural Networks," in IEEE Transactions on Biomedical Engineering, vol. 66, no. 5, pp. 1477-1490,2019. [6] Luo, Y., et al. "Multivariate Time Series Imputation with Generative Adversarial Networks". Neurips, 2018. [7] Wang, Z., et al. "Task-oriented Time Series Imputation Evaluation via Generalized Representers". Neurips 2024. [8] Jarrett, Daniel, et al. Clairvoyance: A Pipeline Toolkit for Medical Time Series. ICLR 2021
Summary: The paper proposes a novel method designed for performing time series imputation when the input data either has missing data of is not measured at equal intervals. The use of discrete Fourier transform in this case often leads to serious artifacts in the power density spectrum. In contrast, the power density spectrum can be more accurately estimated using the Lomb-Scargle methods. This paper shows how this can be incorporated into a diffusion-based method to ensure the frequency information is well captured in imputed time series. The technique is thoroughly tested and compared against strong baseline models. Claims And Evidence: The claim is that this novel approach provides better imputation of this type of time series data. The empirical evidence appears compelling. Methods And Evaluation Criteria: As far as I can see the evaluations are fair and rigorous. Theoretical Claims: N/A. Experimental Designs Or Analyses: No. Supplementary Material: Briefly, but the background looked interesting (I read through appendix A) and there is extensive additional empirical evidence provided. Relation To Broader Scientific Literature: The paper appears to be well versed in the literature around time series. Essential References Not Discussed: None. Other Strengths And Weaknesses: The paper is very well written and accessible. The contribution is novel and substantial. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their positive assessment and kind words about our work. We appreciate that you find the method’s contribution to be novel and substantial, and that our empirical evaluations appear fair and rigorous --- Below we include a general analysis of the computational efficiency of our method, requested by several reviewers.   ### Computation Time We present an analysis of the computational speed of our method (LSCD). A common concern in the reviews was the increased computational cost of Lomb-Scargle with respect to FFT. Indeed, the computational complexity of FFT is $O(N\log{N})$, while Lomb-Scargle's complexity is $O(N \cdot J)$, where $N$ is the number of points in the grid, and $J$ is the number of chosen frequencies. In our method, we use $J = N$ thus the complexity of LS results $O(N^2)$. In effect, for the grid sizes used in PhysioNet and PM2.5, our differentiable implementation of **Lomb-Scargle has a computational cost that is $\mathbf{50}$ times higher than FFT** ($1.25\times10^{-2} s$ vs $2.46\times10^{-4} s$). However, the cost of this operation does not translate directly to the computation time of our LSCD model. In Tables R1 and R2 we show a comparison of training and inference computation times respectively, for the LSCD and CSDI models, evaluated on PhysioNet and PM2.5 datasets. All computations in this analysis were performed using a g5.2xlarge AWS instance (AMD EPYC 7R32 CPU, with an Nvidia A10G 24 GB GPU). As shown in the tables, **the percentage increase in computation time is approx. $9\%$ for training and $13\%$ for inference**. However, the training of our method includes a final fine-tuning stage using the spectral consistency loss $\mathcal{L}_{SCons}$ from Section 4.3, which takes $288.7\,s/ep$ for PhysioNet and $430.3\,s/ep$ for PM2.5, due to requiring running the inference pipeline as part of the computation. Considering this step together with the full training process, **our method resulted in an additional $43\%$ training time for PhysioNet and $45\%$ for PM2.5**. This analysis has been incorporated into the manuscript. **Table R1: Training time per epoch for CSDI and LSCD.** The last column shows the relative increase for LSCD. Measurements were averaged over $10$ epochs. | | CSDI (s/epoch) | LSCD (s/epoch) | $\Delta$Time (%) | |---------------|----------------|----------------|------------------| | PhysioNet | 10.30 | 11.18 | 8.5% | | PM2.5 | 14.82 | 16.23 | 9.5% | **Table R2: Inference time per batch for CSDI and LSCD (batch size = 16).** The last column shows the relative increase for LSCD. Measurements were averaged over $5$ batches. | | CSDI (s/batch) | LSCD (s/batch) | $\Delta$Time (%) | |---------------|----------------|----------------|------------------| | PhysioNet | 88.19 | 99.18 | 13.3% | | PM2.5 | 69.36 | 78.58 | 12.5% | --- Rebuttal Comment 1.1: Comment: Thank you for your response. I stand by my score. For me this was a paper I enjoyed reading. Clearly Lomb-Scargle methods have been used before (presumably by Lomb and Scargle as well as others), but I agree with the authors view that the originality comes from integrating this into a modern differentiable programming architecture. This is a non-trivial contribution that is often underestimated. Given the other other literature that uses this method predates deep learning, I am still of the view that integrating Lomb-Scargle into a deep learning framework is a novel contribution. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their feedback and continuous support of our work, and for acknowledging the significance of integrating Lomb-Scargle into modern differentiable architectures. We are happy to hear that the reviewer enjoyed reading our paper.
Summary: This paper introduces a novel diffusion-based time series imputation method. Specifically designed for irregularly sampled data, the proposed method leverages the Lomb-Scargle periodogram to enhance imputation performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical results are provided in the paper. Experimental Designs Or Analyses: Yes. The experimental design is reasonable. Supplementary Material: Yes, all parts. Relation To Broader Scientific Literature: The idea of using Lomb-Scargle Periodogram for time series modelling is not novel. See [1] Glynn, Earl F., Jie Chen, and Arcady R. Mushegian. "Detecting periodic patterns in unevenly spaced gene expression time series using Lomb–Scargle periodograms." Bioinformatics 22, no. 3 (2006): 310-316. [2] Ruf, T. "The Lomb-Scargle periodogram in biological rhythm research: analysis of incomplete and unequally spaced time-series." Biological Rhythm Research 30, no. 2 (1999): 178-201. [3] Van Dongen, H. P. A., E. Olofsen, J. H. Van Hartevelt, and E. W. Kruyt. "A procedure of multiple period searching in unequally spaced time-series with the Lomb–Scargle method." Biological Rhythm Research 30, no. 2 (1999): 149-177. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1) The idea of using the Lomb-Scargle periodogram to impute unevenly distributed time series data is reasonable. 2) The proposed method is assessed on different time series datasets. Weakness: 1) The novelty of the proposed method is limited. Although incorporating the Lomb–Scargle periodogram into the diffusion model is relatively new, the idea of using it to deal with time series data has been studied by many previous works and the proposed framework is still mainly based on CSDI with only small changes. 2) From results presented in Table 2, the proposed method did not show significant empirical performance improvement compared to other diffusion-based method that did not use spectral method, i.e., CSDI. 3) The theoretical results regarding the effect of the Lomb-Scargle periodogram on the diffusion process are missing. Other Comments Or Suggestions: The authors could provide a theoretical analysis of how the Lomb-Scargle periodogram influences the diffusion process. Questions For Authors: How is the computational efficiency of the Lomb-Scargle periodogram compared with FFT? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough assessment of our work and for their useful comments. Below are our detailed responses, we hope they address any remaining concerns. &nbsp; ### Relation To Broader Scientific Literature (Novelty) We appreciate the presented references and recognize that the Lomb–Scargle periodogram has a longstanding history in signal processing and time series analysis. Our goal is not to claim it as a new method for time series, but rather to show how integrating a differentiable Lomb–Scargle operator into a diffusion-based architecture opens up new possibilities for modern machine learning pipelines. Unlike classical uses of Lomb–Scargle (e.g. for detecting periodicities in irregular or biological signals), our approach is the first to incorporate it into a trainable model. This enables gradient-based optimization of spectral features, allowing the network to learn periodic structure from irregular data in a way that has not been previously explored. We have also released our differentiable Lomb–Scargle implementation to encourage broader adoption of frequency-based conditioning in irregular time-series tasks. We will revise the manuscript to cite these prior works and to clarify the distinct contributions of our approach. &nbsp; ### Weaknesses (Table 2 Results) The results in Table 2 indicate that our method outperforms all other baselines, including CSDI, across all real datasets and evaluation metrics. However, CSDI consistently ranks second, with varying margins with respect to LSCD. A follow-up analysis using a one-sided Wilcoxon signed-rank test suggests that the improvement over CSDI is statistically significant with only moderate confidence ($p=0.065<0.1$). When combining both synthetic and real datasets from Tables 1 and 2, our method significantly outperforms CSDI according to a one-sided Wilcoxon signed-rank test ($p < 0.05$ for all metrics), confirming consistent improvements in imputation quality. In the following table we display the average MAE rankings of the methods for Table 1, Table 2 and the aggregate of both tables. Notably, multiple methods outperform CSDI in the synthetic datasets, while our approach remains consistently ranked first. We will incorporate this analysis as part of the supplementary material. **Table R3: Average MAE ranking per table** | | MEAN | LERP | BRITS | GPVAE | US-GAN | TimesNet | CSDI | SAITS | ModernTCN | Ours | |--------|:------:|:------:|:--------:|:--------:|:---------:|:---------:|:-----:|:-------:|:-------:|:-------:| | Table1 | 5.9 | 9.6 | 2.6 | 6.9 | 6.8 | 8.7 | 6.1 | 2.2 | 4.6 | 1.8 | | Table2 | 10.0 | 4.9 | 4.0 | 7.9 | 7.5 | 7.1 | 2.0 | 3.6 | 7.0 | 1.0 | | Table1+Table2 | 7.2 | 8.1 | 3.0 | 7.2 | 7.0 | 8.2 | 4.9 | 2.6 | 5.3 | 1.6 | &nbsp; ### Weaknesses (Theoretical Results) As a starting point, we point out that without the spectral consistency loss, our setup is similar to [1]. The authors condition on the high and dominant frequency components of the FFT of the observed time series, whereas our technique conditions on the Lomb-Scargle periodogram. In [1], a theoretical result is presented that shows that the conditional entropy of the diffusion reverse process given the frequency representation of time series is strictly less than the conditional entropy of the reverse diffusion process without the frequency information: $$ \mathbb{H}({\bf z}|\hat{\bf X}_t, {\bf X}^{\bf C}, {\bf C}^{\bf H}, {\bf C}^{\bf D}) < \mathbb{H}({\bf z}|\hat{\bf X}_t, {\bf X}^{\bf C}), $$ where ${\bf z} = \hat{\bf X}_{t-1}$, ${\bf X}^{\bf C}$ denotes the time-series observation condition, and ${\bf C}^{\bf H}$ and ${\bf C}^{\bf D}$ denote the high-frequency and dominant-frequency condition, respectively. The essence behind this theoretical result is that incorporating additional conditional information reduces the entropy of the reverse process. This result extends to *any* conditional information, which means that if we condition on the Lomb-Scargle representation of the time series, it should also reduce the conditional entropy of the diffusion process. In the revision of our paper, we will include this analysis as a theoretical insight of the method. [1] X. Yang et al. *Frequency-aware Generative Models for Multivariate Time Series Imputation*. NeurIPS, 2024. &nbsp; ### Question (Computational Efficiency) We have included a detailed analysis of computational efficiency in the response to Reviewer 3 (**ZpA3**). --- Rebuttal Comment 1.1: Comment: Thanks for the response. Incorporating a Lomb–Scargle layer into an existing diffusion model (I assum it is CSDI) is not novel enough. And the code for implenmenting Lomb–Scargle layer via Pytorch is not hard to find. I believe that the Lomb–Scargle layers can be replaced by other silmilar types of layers, such as FFT or Koopman.Therefore, my suggestion is that the authors should explore not only the emperical but more theretical aspects of the approaches. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback, we address the individual comments below. > Incorporating a Lomb–Scargle layer into an existing diffusion model (I assum it is CSDI) is not novel enough. And the code for implenmenting Lomb–Scargle layer via Pytorch is not hard to find. At the time of submission, we were unable to find any publicly available PyTorch implementation of Lomb–Scargle. Our implementation provides a compact and efficient layer that supports batching, masked inputs, and False Alarm Probability (FAP) estimation, which are essential for stable training and practical use in deep learning pipelines. To the best of our knowledge, we are the first to incorporate Lomb-Scargle as a layer in a differentiable training workflow. Moreover, we believe that our approach addresses an important problem that has not been previously addressed: how to reliably incorporate spectral information into training in scenarios with incomplete or irregular time series. We believe that making this code publicly available will constitute a valuable contribution to the community. > I believe that the Lomb–Scargle layers can be replaced by other silmilar types of layers, such as FFT or Koopman. While FFT is often used in deep learning pipelines, it implicitly assumes uniform sampling. Irregularly sampled data or missing values typically require zero-padding or interpolation, which can distort spectral estimates as shown in Figure 1. In contrast, Lomb-Scargle is specifically designed to handle irregular or incomplete time series data [1]. Replacing the LS layer for FFT would not be appropriate in the case of time series imputation, or in other learning tasks where the input time series has missing values or is irregularly sampled. Similarly, Koopman methods for irregular or incomplete time series require interpolation of the data, to a regular grid or a complex continuous-time representation [2]. [1] J. VanderPlas. Understanding the Lomb–Scargle Periodogram. The Astrophysical Journal (2018). [2] I. Naiman et al. Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs. ICLR (2024).
Summary: This paper introduces Lomb–Scargle Conditioned Diffusion (LSCD), an approach for irregularly sampled time series imputation. Unlike traditional frequency-domain methods that rely on the Fast Fourier Transform (FFT), which assumes uniform sampling and requires interpolation, LSCD leverages the Lomb–Scargle periodogram to handle missing data directly in the frequency domain. The method integrates a score-based diffusion model conditioned on the Lomb–Scargle spectrum, ensuring that the imputed time series aligns with its true spectral content. To enhance performance, LSCD employs a spectral encoder and a spectral consistency loss, reinforcing the coherence between imputed series and their frequency representation. Experiments on synthetic sine wave data and two real-world datasets, PhysioNet ICU patient records and PM2.5 air quality data, demonstrate that LSCD outperforms existing baselines in both time-domain accuracy (MAE, RMSE) and spectral preservation (S-MAE). Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No proofs in the paper Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The study highlights the potential of Lomb–Scargle-based spectral conditioning in machine learning applications involving incomplete time series. Essential References Not Discussed: Key related works on irregularly sampled time series, particularly those for regression tasks, are missing. Other Strengths And Weaknesses: Strengths: 1. LSCD elegantly integrates the Lomb–Scargle periodogram into a diffusion model, allowing it to directly condition time-domain imputation on spectral information without requiring interpolation or zero-filling. This fusion preserves the frequency structure of irregularly sampled data while leveraging the powerful generative capabilities of diffusion models. 2. By providing a differentiable implementation of the Lomb–Scargle periodogram, LSCD enables end-to-end learning, making it seamlessly compatible with modern deep learning frameworks and adaptable to various time series tasks with missing or irregularly sampled data. 3. LSCD consistently outperforms both FFT-based and diffusion-based imputation methods by achieving lower MAE, RMSE, and better spectral fidelity (S-MAE). Weaknesses: 1. This paper lacks discussion on related works addressing irregularly sampled time series, particularly for regression tasks such as imputation and forecasting [1-9] (a representative but non-exhaustive list). The authors should elaborate on the differences and advantages of their approach over these prior works and compare it with some state-of-the-art algorithms among them. 2. This work lacks an efficiency evaluation of the proposed model, which is crucial for practical applicability and broader usage. [1] Latent ODEs for Irregularly-Sampled Time Series. NeurIPS, 2019. [2] GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series. NeurIPS, 2019. [3] Neural Flows: Efficient Alternative to Neural ODEs. NeurIPS, 2021. [4] Multi-time attention networks for irregularly sampled time series. ICLR, 2021. [5] Modeling Irregular Time Series with Continuous Recurrent Units. ICML, 2022. [6] Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series. ICML, 2023. [7] Modeling Temporal Data as Continuous Functions with Stochastic Process Diffusion. ICML, 2023. [8] GraFITi: Graphs for Forecasting Irregularly Sampled Time Series. AAAI, 2024. [9] Irregular Multivariate Time Series Forecasting: A Transformable Patching Graph Neural Networks Approach. ICML, 2024. Other Comments Or Suggestions: 1. There is a typo: ‘Ls’ in the equation in Section 4.2. Questions For Authors: 1. Can this model impute measurements at any timestamp even without mask placeholders? Given the same observed time series, does the number of mask placeholders influence the imputation results for a specific timestamp? 2. What are the limitations of the proposed model? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough and positive assessment of the manuscript. We appreciate the recognition of the relevance of the work, as well as the modeling and evaluation choices. Below are our detailed responses, we hope they address any remaining concerns. &nbsp; ### Weakness 1 (Related Work) We appreciate these valuable references in continuous-time modeling and specialized architectures for irregularly sampled data. In the revision of our paper, we will include an additional discussion on the comparison of score-based diffusion-based approaches and some of the aforementioned continuous-based approaches specifically w.r.t. the task of probabilistic time-series imputation. With respect to latent ODE/SDE models, a variational framework is typically used to train the model, and conditioning on a set of observed time-points to obtain the approximate posterior enables imputation by a forward pass of the learned model (filtering). More advanced variations, such as [6], perform an additional backward pass to obtain more accurate imputations (smoothing) based on all observed information. On the other hand, methods like CSDI and our method pre-define a fixed-window size for the generated time series and explicitly learn the conditional distribution $p(\mathbf{X}^{\text{mis}} | \mathbf{X}^{\text{obs}}, \mathbf{C}^{\text{obs}})$, where $\mathbf{X}^{\text{mis}}$ denotes the missing values we are trying to impute, $\mathbf{X}^{\text{obs}}$ denotes the observed samples at the time of imputation, and $\mathbf{C}^{\text{obs}}$ denotes a conditioning variable that provides additional information (based on $\mathbf{X}^{\text{obs}}$) to enhance the imputation of $\mathbf{X}^{\text{mis}}$. Examples in training are created using a mask. At inference time, filtering and smoothing are not used to impute, but a simple forward pass through the diffusion model with appropriate conditioning based on the observed values. In addition to a deeper discussion of these models, we plan to include new experimental comparisons with a subset of these continuous-time approaches (e.g. Latent ODEs [1] or DSPD-GP [7]) in the final version of the paper as additional results. This will help illustrate how the LS-based diffusion framework fares relative to advanced continuous-time baselines, highlighting both advantages (e.g. explicit spectral preservation) and trade-offs (e.g. requiring a pre-defined grid). &nbsp; ### Weakness 2 (Efficiency Analysis) Please refer to the response to Reviewer 3 (**ZpA3**) for a detailed analysis of computational efficiency of our method. &nbsp; ### Question 1A (Arbitrary timestamp imputation) LSCD, like most score-based diffusion models such as CSDI, operates on a fixed time grid and requires explicit placeholders defined by a mask to know where to perform imputation. It does not generate a continuous function that can be queried at any arbitrary timestamp. In contrast, models like Neural ODEs learn continuous latent trajectories that can be evaluated at any point in time, even those not seen during training or inference. &nbsp; ### Question 1B (Effect of number of mask placeholders) Our model, like CSDI, performs joint conditional imputation over masked timestamps, and thus the prediction at a given timestamp may depend on the number and location of other masked points. However, we additionally condition on the spectrum of the observed time series, which provides global frequency information that can help stabilize the imputation process. &nbsp; ### Question 2 (Limitations) Our approach presents the following limitations: - **Irregularly Sampled Data:** Our method supports missing time-steps but relies on a regular time grid. It also supports irregular sampling, but this requires upsampling to a fine-grained grid, which may be computationally expensive. - **No Continuous-Time Output:** Unlike neural ODEs, LSCD does not output a continuous function over time. This may limit its applicability in tasks requiring continuous-time interpolation or forecasting. - **Frequency Grid Dependence:** The model conditions on a fixed set of frequencies. An improper grid choice could lead to suboptimal conditioning, although this is mitigated by the spectral encoder and FAP-based filtering. We will incorporate the discussion of the limitations of our approach in the manuscript. --- Rebuttal Comment 1.1: Comment: Based on the response regarding Q1 and Q2, this work fails to handle the imputation task for general irregularly sampled time series (i.e., interpolation in continuous time). I think the use of the term 'Irregular Time series Imputation' in this paper is not rigorous as Irregular Time series is not equivalent to time series with missing data. --- Reply to Comment 1.1.1: Comment: Thank you again for your thorough analysis of our method. We would like to provide additional clarifications regarding its limitations and how it differs from continuous-time approaches. While score-based diffusion models such as CSDI and LSCD rely on a fixed time grid, they can still handle the interpolation of irregularly sampled series. As an example, the authors of CSDI show results on irregular time series interpolation in Section 6.2 of the paper, where they compare with two continuous-time baselines (Latent ODEs [1] and mTAN [2]). Furthermore, Lomb-Scargle natively supports irregularly sampled data, hence it can be integrated in both grid-based and continuous-time approaches seamlessly. However, grid-based methods such as LSCD and CSDI only support irregularly sampled time series **provided that the interpolation time points are known at training time**. In contrast, continuous-time methods only require knowledge of the interpolation time points at inference time. We have included a discussion of this point in the text, highlighting the advantage of continuous-time methods. In Table R4, we present preliminary results for irregularly sampled time series interpolation, comparing LSCD with CSDI and two continuous-time baselines (Latent ODEs [1] and mTAN [2]). We follow the experimental setup from Section 6.2 in CSDI. Results show that LSCD handles irregular sampling effectively, with moderate improvements over CSDI, the second-best performing model. However, we acknowledge that these experiments are comparatively limited in scope, and the more extensive evaluations in Tables 1 and 2 showcase results on time series data with missing values rather than fully irregular data. Accordingly, we have removed the term "irregular" from our paper’s title, as suggested by Reviewer 4 (y6X7), and carefully revised the manuscript to clarify that we primarily address missing data rather than irregularly sampled time series. **Table R4: Results on irregularly sampled time series interpolation.** | Metric | LatentODE*| mTAN* | CSDI | LSCD | |--------------|-----------|--------|----------|--------| | MAE 10% | 0.522 | 0.389 | 0.371 | 0.281 | | RMSE 10% | 0.799 | 0.749 | 0.798 | 0.528 | | MAE 50% | 0.506 | 0.422 | 0.387 | 0.382 | | RMSE 50% | 0.783 | 0.721 | 0.687 | 0.672 | | MAE 90% | 0.578 | 0.533 | 0.543 | 0.545 | | RMSE 90% | 0.865 | 0.836 | 0.851 | 0.850 | (*) Values obtained from Tashiro et al. (2021) [3] [1] Latent ODEs for Irregularly-Sampled Time Series. NeurIPS, 2019. [2] Multi-time attention networks for irregularly sampled time series. ICLR, 2021. [3] CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation. NeurIPS, 2021.
null
null
null
null
null
null
Learning Parametric Distributions from Samples and Preferences
Accept (spotlight poster)
Summary: This paper studies the conditions under which preference feedback improves parameter estimation. The authors show that preference-based estimator can achieve a better assympototic variance than sample-only estimators. When incorporated with hard constraints with deterministic preference, the authors prove an estimation error of $\mathcal{O}(1/n)$, improving upon traditional rate $\mathcal{O}(1/\sqrt{n})$, under some restrictive assumptions. They also develop a matching lower bound. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: Yes, in Section 6 Supplementary Material: Yes, the major framework of proof Relation To Broader Scientific Literature: It is generally related to machine learning community Essential References Not Discussed: No Other Strengths And Weaknesses: ## Strengths The paper studies a very interesting problem, i.e. how does preference labels improve statistical estimation. The theoretical results are solid and also surprising overall, providing important insights on the benefits of preference. ## Weaknesses As mentioned by authors, the assumptions are quite restrictive and are only verified under simple setup. This is not a big issue, though. Other Comments Or Suggestions: see Strengths And Weaknesses part above Questions For Authors: ### 1. Do the results in Section 4 rely on the specific reward model $r_\theta$? Does it have to be the log likelihood $\log p_\theta$? If not, can the authors provide some examples satisfying all the assumptions while supporting general reward functions? ### 2. Can the authors provide more high-level intuitions behind the accelerated rate of $\mathcal{O}(1/n)$? For examples, when the reward model is just log likelihood, the deterministic preference label only provides additional information on the magnitude relationship between $\log p(x),\log p(y)$. Why is this sufficient to improve the estimation error? ### 3. Is it possible to consider misspecification case, i.e., $\theta_*\not\in \Theta$? In this case, $\hat{\theta}$ should converge to an optimal estimator in $\Theta$. Would similar acceleration effects hold? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer yquN for the time spent and the positive feedback. We address the reviewer’s questions below. **1. Reward models** Except for Theorem 4.3, all the derivations in Section 4 hold for general (hence reward-based) preference models provided Assumptions 4.2,4.4,4.5 and 4.7 hold. Characterizing the expressivity of parametric rewards satisfying those assumptions is interesting, yet challenging. We provide two positive and one negative examples. - Positive: monotonic reward. Suppose that $\tilde \ell_{\theta}(x,y) = f(p_{\theta}(x)) - f(p_{\theta}(x))$ where $f$ is increasing on $[0,1]$. Since $sign(\tilde \ell_{\theta}) = sign(\ell_{\theta})$, hence the parameters with zero classification loss and our estimators are the same. Therefore, our results hold for this class of rewards when our assumptions hold for the log-likelihood reward. When $f$ is decreasing, the preferences are “reversed”, and similar arguments can be made. This example includes (1) normalization by a multiplicative constant (e.g., temperature $\beta$) and (2) the odds-ratio reward-based preference based on $f(x) = \log(x/(1-x))$ and defined by Hong et al. (2024, ORPO: Monolithic Preference Optimization without Reference Model). - Positive: margin with Gaussian. Suppose that $\tilde \ell_{\theta} = \ell_{\theta} + c$ where $c$ is a constant and $\ell_{\theta}$ is the Gaussian log-likelihood preference. By extending our computations from Appendix E, Assumptions 4.2, 4.4, 4.5, and 4.7 hold with $c$-dependent positive constants. Margins are used by Meng et al. (2024, SimPO: Simple Preference Optimization with a Reference-Free Reward) and IPO from Azar et al. (2023, A General Theoretical Paradigm to Understand Learning from Human Preferences). - Negative: reference model with Gaussian. Suppose that $\tilde \ell_{\theta} = \ell_{\theta} - \ell_{\theta_0}$ where $\theta_0$ is known and $\ell_{\theta}$ is the Gaussian log-likelihood preference. Since $\tilde \ell_{\theta}(x,y) = \langle x-y, \theta - \theta_0 \rangle$ and $\nabla_{\theta} \tilde \ell_{\theta}(x,y) = x-y$, Assumption 4.5 is violated for $u=\theta^\star - \theta_0$. Not all direct alignment algorithms rely on a reference model (see SimPO or ORPO). **2. Accelerated rate** Accelerated rates arise when accumulating random variables having a positive density at a specific point through a minimum (or maximum) operator. - When estimating the location parameter $\theta$ of a uniform distribution over $[\theta,\theta+1]$, the optimal estimator achieving the accelerated rate is the minimum of uniform observations whose density is positive at $\theta$. - For deterministic preferences with log-likelihood rewards, we observe the true ordering between likelihoods. This enforces a hard constraint on the admissible parameters, which can be expressed with a minimum operator. More precisely, the maximal deviation $R_{n,u}$ along direction $u$ is upper bounded by the minimum of positive random variables whose density is positive at zero under Assumption 4.7. With high probability, this min operator is upper bounded by $O(1/n)$. The proof combines Lemma 4.6 and an upper bound on the inverse of the cdf based on a Taylor expansion around $0$. **3. Misspecification** Under misspecification, the deterministic preferences might not provide separability within $\Theta$ since $\theta^\star \notin \Theta$. Then, DP MLE should be defined as SP MLE by using the 0-1 loss $1(u < 0)$ instead of the logistic loss $-\log \sigma(u)$. This combines a cross-entropy loss and a classification 0-1 loss, reweighted by a regularization $\lambda > 0$. This objective is reminiscent of single-stage alignment procedures such as ORPO and ASFT, see Gorbatovski et al. (2025, The Differences Between Direct Alignment Algorithms are a Blur). Without separability, computing DP MLE can be NP-hard. Under sufficient regularity, DP MLE converges to $\theta_{0} \in argmin_{\theta \in \Theta} KL(\theta^\star,\theta) + \lambda m(\theta)$ where $m$ as in line 270, where $\theta_0 \ne \theta^\star$. This minimization is challenging, as $\theta \to m(\theta)$ might not be convex. Deriving a tractable ELBO method for this optimization is an interesting direction to obtain tractable and robust estimators. As $\theta_0$ lies in the boundary of $\Theta$, we should control the maximal deviation wrt to $\theta_0$ for directions that point towards the interior of $\Theta$ to prove an accelerated rate. While some elements of our analysis might be salvaged, we believe that finer technical arguments should be derived to capture this interesting setting. **Restrictive assumptions** See the answer to Reviewer vgrP for a detailed discussion on how to weaken them to local conditions. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' feedback. I keep my recommendation for acceptance.
Summary: This paper studies when adding preference feedback can boost the parameter estimation in the cases of Gaussian and Laplace distributions. The results are mainly theoretical, containing three parts: (1) For M-estimators, adding an additional ``preference'' term related to the logarithm of probability helps reduce the asymptotic covariance; (2) For estimators based on hard preference constraints, the error converges at a rate of $\mathcal{O}(1/n)$ with high probability; (3) This rate is mini-max optimal up to dimension and problem-dependent constants, using Assaud's Lemma. Claims And Evidence: Most of the results are theoretical and supported by proofs. The assumptions are satisfied by the Gaussian or the Laplace distributions. Methods And Evaluation Criteria: The estimators in this paper are mostly theoretical. SO and SP are typical M-estimators. AE and DP require (1) solving a feasibility problem, which can be NP-hard for general cases; and (2) the hard preference assumption (analogous to the feasibility condition). I don't think this could be the real case. Theoretical Claims: I checked the proof sketch. It makes sense to me. Experimental Designs Or Analyses: I checked the experiment results. As I mentioned before, this paper is theoretical. Yet there are still some minor issues. 1. It seems a little weird that any randomized estimator in $\mathcal{C}_n$ (RU) outperforms the one that maximizes the log-likelihood in $\mathcal{C}_n$ (DP) in Figure 1(a). 2. The legend doesn't match the figure's line style. Supplementary Material: No. Relation To Broader Scientific Literature: This paper provides a new perspective in analyzing the role of preference. This paper's case is not the same as the human preference alignment (e.g. RLHF): the paper is studying an estimator ``plus'' some preference data as an additional source of information, while RLHF or DPO is trying to learn something from only the preference data, implying that the paper may be of limited value to the LLM literature. However, this paper still provides some interesting observations, which might be of interest to the community of statistics. Essential References Not Discussed: No (as far as I know). Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer Gth4 for the time spent and the encouraging feedback. We address the reviewer’s questions below. **Iterative human preference alignment** We investigate the case where pairs of observations and their preferences are tied together, which includes the log-likelihood ratio as preference. We detail the connections with iterative human preference alignment. - Many human preference alignment methods build on the Bradley-Terry (BT) model for preference, based on rewards. Direct alignment algorithms use variants of the log-likelihood to define the implicit reward of a policy. Choosing $\ell_{\theta}(x,y) = \log p_{\theta}(x) - \log p_{\theta}(y)$ coincides with the optimal policy for maximum entropy RL (see, e.g., Swamy et al, 2025, All Roads Lead to Likelihood: The Value of Reinforcement Learning in Fine-Tuning). - For offline preference data, the assumption $(X,Y) \sim p_{\theta^\star}^{\otimes 2}$ is unrealistic as $\ell_{\theta^\star}$ is collected from a fixed data set of pairs of observations. Recent LLMs are built on iterative alignment procedures. At stage $N$, the model $p_{\theta_N}$ is trained based on the preference data for generations by the previous model, i.e., $(X,Y) \sim p_{\theta_{N-1}}^{\otimes 2}$. Under the realizability assumption and without mode collapse, this self-refinement paradigm converges towards the true model $p_{\theta^\star}$. Our setting characterizes the limiting behavior of this iterative process, i.e., preference based on $\ell_{\theta^\star}$ for observations from $p_{\theta^\star}$. **1. RU versus DP** Figure 1(a) provides evidence suggesting that the randomized estimator (RU) and the worst-case estimator (WE) perform on par with DP MLE: RU is slightly better than DP, itself slightly better than WE. Figures 1(b) and (2) highlight that DP outperforms WE and AE for larger dimensions, where the gap increases when $d$ is nonnegligible compared to $n$. Therefore, only DP obtains the best-of-both world estimation error rate. For large $d$, implementing RU is challenging. We conjecture it suffers from the same limitation as AE. This is supported by additional experiments on new estimators using the setting of Section 6, see anonymous plots at https://anonymous.4open.science/r/ICML25SuppExp . - Figure 1(a) extended. The center estimator (CE) returns the center of the interval $\mathcal C_n$. The truncated Gaussian estimator (TrG) returns a realization from a Gaussian distribution with mean CE and variance $4/n$, which is truncated to $\mathcal C_n$. TrG performs on par with RU. CE outperforms both TrG and RU. This suggests that being far away from the boundary of $\mathcal C_n$ improves performance compared to DP that lies on the boundary of $\mathcal C_n$ as observed empirically. Moreover, randomization on $\mathcal C_n$ worsens performance compared to CE. Using the derivation in lines 55-63, it is coherent that CE improves on DP by a multiplicative constant: the average of those two (non-independent) random variables decreases faster. This can be proved by refining the proof of Lemma 4.6 to account that $n = N_{\theta^\star,-1} + N_{\theta^\star,1}$ (defined in Line 658). - Figures 1(b) and 2 extended. For $d>1$, multiple centers exist and we use the Chebyshev center estimator (CCE) of $\mathcal C_n$. While CCE outperforms AE by a constant margin, CCE only outperforms DP in the regime of large $n$ compared to $d$. It performs worse than SO for small $n$. Geometrically, for small $n$ and large $d$, the random polytope $\mathcal C_n$ is more likely to be “spiky” along some directions. Due to those distant vertices, the center becomes a worse estimator than DP, since the “average” is intuitively less robust to outliers. In contrast, DP dominates SO statistically (Lemma 4.1), hence it achieves rate $O(\sqrt{d/n})$ when $n$ is small compared to $d$. **2. Line style** The dashed line is shorter than the solid line, see SP (sto) and SO, yet others are not distinguishable. We will correct this.
Summary: The paper provides a set of estimators and conditions to improve the estimation error in learning the parameters of continuous parametric distributions when additional preference feedback is available. More concretely, the question is the following: For a continuous parametric distribution $p\_\theta$ with i.i.d. samples $\{(x\_i, y\_i)\}_i$ and a known reward function $r\_\theta$, how/when does including noisy/deterministic preferences $z\_i \propto r\_\theta(x\_i) - r\_\theta(y\_i)$ improve the estimation error of the parameter $\theta$? To answer the above question, the authors first leverage the asymptotic theory of M-estimators, showing that a maximum-likelihood estimator (MLE) that takes preference data into account has the same standard error rate as sample-only estimators, i.e., $\Theta(\frac{1}{\sqrt{n}})$, while achieving a potentially improved asymptotic variance for noisy preferences and a further improved variance for deterministic preferences. For deterministic preferences, they take a further step and provide another estimator: an MLE with the hard constraints given by preferences. They then make several assumptions on $p_\theta$ and $r_\theta$ to show that this new estimator can achieve an accelerated error rate of $\mathcal{O}(\frac{1}{n})$ compared to the standard $\Theta(\frac{1}{\sqrt{n}})$. In particular, they show that Normal and Laplace distributions with log-probability reward functions satisfy these assumptions. Finally, they prove that the rate of $\mathcal{O}(\frac{1}{n})$ is minimax optimal up to problem-specific dimensions and logarithmic factors. Toy experiments on a multivariate Normal distribution are provided to support the theoretical findings. Claims And Evidence: The paper is a theory paper, where all its theoretical claims have been rigorously proved under the stated assumptions. The authors do not overstate their contributions and clearly acknowledge the limitations of the work (e.g., the restrictiveness of the assumptions). Moreover, the toy experiments in Section 6 are consistent with and provide empirical support for the theoretical claims established in the previous sections. Methods And Evaluation Criteria: The paper primarily contributes to the theoretical aspects of preference learning. The main methodology described is the deterministic preferences MLE (DP-MLE) presented in Section 4, which uses a 0-1 loss to constrain the set of feasible parameters based on the implicit assumption that reward models are well-specified. This approach makes sense if there are good reasons to believe the reward model is indeed well-specified. However, I do have some concerns about the assumptions used in the analysis, which I will outline in the following sections. Theoretical Claims: The paper has several theoretical claims: 1. ${\color{green}\text{Lemma 3.1}}$ and Lemma 3.2 on the asymptotic variance of the preference-based M-estimators. 2. ${\color{green}\text{Lemma 4.1}}$ on the benefits of using their proposed constraint-based estimator compared to M-estimators for Normal distributions. 3. ${\color{green}\text{Theorem 4.8}}$ (including ${\color{green}\text{Lemma 4.6}}$) and its corollary ${\color{green}\text{Theorem 4.3}}$ on proving the accelerated rate of $\mathcal{O}(\frac{1}{n})$ for their proposed estimator. 4. Theorem 5.3 (including Lemma 5.1) on the estimation lower bound for the deterministic feedback case. 5. Proving that the Normal and Laplace distributions satisfy Assumptions 4.2, 4.4, 4.5, 4.7, and 5.2, which are necessary for the theoretical claims in the paper (Appendices E, F). I have only checked the correctness of the results shown in ${\color{green}\text{green}}$ and did not find any issues. Experimental Designs Or Analyses: As far as I can tell, the experiment section presents a toy multivariate Gaussian setting with the sole purpose of supporting the theoretical findings. The code is provided, but I did not verify it directly. However, the experimental results appear sound and provide appropriate empirical support for the theoretical claims. Supplementary Material: I have reviewed the proofs provided in Appendix B (except for section B.2) and Appendix C. I did not review the results in Appendices D, E, and F (corresponding to the theoretical results I have not proof-checked). Relation To Broader Scientific Literature: The contributions are directly related to the empirical success of preference-based fine-tuning of large language models through methods like RLHF, compared to methods that only rely on positive examples such as supervised fine-tuning. In this context, the work attempts to develop a deeper theoretical understanding of when such preferences can help improve learning, using a simplified parametric setting. The ideas presented in Section 3, regarding the effect of adding preferences to standard M-estimators, are primarily based on the well-established asymptotic normality theory of parametric MLE. The authors apply similar techniques and tools to calculate the Fisher information matrix in the preference-based setup and investigate the conditions under which it can be strictly more informative than the standard M-estimator. The results in Section 4, however, appear more novel and rely on the hard constraints imposed by deterministic preferences. The authors have appropriately discussed previous related results that use similar hard constraints to achieve better estimation error: for example, the parameter estimation of a uniform distribution on $[\theta, \theta + 1]$ by taking the minimum of samples (Wainwright, 2019), which has a known minimax rate of $\Theta(\frac{1}{n})$. Essential References Not Discussed: As far as I know, the essential references have been discussed. Other Strengths And Weaknesses: **+ Soundness, Novelty, and Technical Contributions** All assumptions are concretely specified, and I find the technical contribution around achieving the accelerated rate of $\mathcal{O}(\frac{1}{n})$ both interesting and novel. While I have concerns about the restrictiveness of the assumptions, the authors demonstrate that both Normal and Laplace distributions satisfy these assumptions, which can be seen as a meaningful degree of practicality. All results are well-supported by rigorous proofs and tested by the toy experiments. **- Quality of Presentation** The presentation of the work could be significantly improved. The current writing, especially in Sections 1 to 3, reads more like a collection of independent chunks of information without a cohesive story connecting them. I believe the authors have developed several ideas and attempted to articulate them, but in doing so, they relied on implicit contextual understanding that isn't provided in the text. I would suggest approaching the writing from the perspective of a reader encountering the paper with no prior knowledge of the work and providing sufficient context throughout. Additionally, in Section 2, definitions and motivations are sometimes intermingled, making it unclear what constitutes a formal definition versus what serves as intuitive examples. The following concrete instances illustrate these issues, though they are more cases: 1. I needed to read the entire paper first to understand the paragraph in lines 57-63 about how hard constraints can help achieve better estimation error for a standard normal distribution. The presentation would benefit from more context regarding what is known and not known by the estimator about the estimation task, what the parameter of interest actually represents, and why $S_i$ is defined in this way. 2. In Section 2, the paragraph about *informative preferences* (lines 137-150) defines the two sets $\mathcal{G}_0$ and $\mathcal{G}_1$ based on a vague notion of "informativeness" without providing context for why one might be interested in samples with non-zero preference gradients. What does it mean to say "only preferences of samples in $\mathcal{G}_1(\theta^\star)$ can provide information on $\theta^\star$"? Why is $\mathcal{G}_0$ defined if informativeness is only based on $\mathcal{G}_1$? 3. Also in Section 2, the paragraph on *negative examples* (lines 152-164) is extremely unclear. This paragraph could be placed anywhere in the paper without affecting the overall narrative. The claims lack concreteness, and no proof or proof sketch is provided. **- Significance of the Results for the Community** The main limitation of this work is perhaps its overly restrictive set of assumptions, which may limit its significance and applicability in the broader community, particularly in preference learning. Reading the first lines of the abstract, It seems that the authors motivate the applicability of their theory based on advances in preference learning for language models. While the authors acknowledge the restrictiveness of their assumptions, they rely on their results showing that Normal and Laplace distributions satisfy these assumptions to claim broader applicability. However, I do not believe that merely demonstrating compliance with these assumptions for Normal/Laplace distributions is sufficient to establish applicability in more complex scenarios like preference learning in language models. My concerns are twofold: 1. The deterministic method (DP-MLE) with 0-1 loss in Section 4 relies on the well-specification of the reward (preference) model. Although standard asymptotic theory for M-estimators also assumes well-specification of the parametric model class, there seems to be a big difference. In standard MLE, if the model is misspecified, one can employ quasi-MLE to obtain robust estimation (see [1]). However, in the deterministic case, if the reward model is misspecified, the constraint set $\mathcal{C}_n$ may not necessarily converge to a set containing the true parameter $\theta^\star$ as $n \to \infty$. Since DP-MLE is constrained to $\mathcal{C}_n$, I suspect it could yield arbitrary estimates under misspecified models and lack robustness. This is particularly concerning given that model misspecification is almost always a possibility, especially when dealing with human annotators, where reward models are known to be misspecified [2]. 2. Even assuming correctly specified models, the paper provides no recipe to verify whether Assumptions 4.4, 4.5, and 4.7 hold for a given parametric model. These assumptions appear extremely difficult to check for arbitrary parametric models. The authors devote six pages of mathematical derivations just to prove them for the relatively simple cases of Normal and Laplace distributions. **References** [1] White, Halbert. "Maximum likelihood estimation of misspecified models." Econometrica: Journal of the econometric society (1982). [2] Casper, Stephen, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman et al. "Open problems and fundamental limitations of reinforcement learning from human feedback." arXiv preprint arXiv:2307.15217 (2023) Other Comments Or Suggestions: 1. The definition of 0-1 loss for stochastic preferences in lines 230-231 is unclear. I suggest the authors clarify what this means exactly and explicitly state why minimizing such a loss is NP-hard. 2. Theorem 4.3 is presented as a corollary of the main Theorem 4.8, yet it appears in an earlier section. This may cause confusion for readers. I suggest the authors first state Theorem 4.8 and then present the corollary specifically for Normal/Laplace distributions to improve logical flow. 3. The proof of Theorem 4.8 could be easier to grasp if the authors provided some intuition behind the definition of $V_{\theta^\star, u}$ on line 302. 4. There appears to be a typo in lines 431-432. Questions For Authors: Despite the limitations I mentioned, I still think the paper has the potential to be accepted at the conference. The deciding factor for me is the authors' response to the limitations I highlighted under "Significance of the Results for the Community" in the weaknesses section. Could the authors elaborate on the implications of model misspecification in DP-MLE and also explain how one can verify assumptions 4.4, 4.5, and 4.7 for realistic models? I don't necessarily expect proof that DP-MLE is robust to misspecification, but I would expect at minimum an acknowledgment of this as a major limitation of the work. *Minor Questions:* 1. In Figure 1.a, how does the RU method outperform DP? This seems somewhat counter-intuitive. Could you elaborate on this observation? 2. In Section 6, what is the goal and implication of including the paragraph about covariance gap starting in line 434? It seems disconnected from the other points in the experiment section. Could you provide more context for its relevance? 3. How do you envision applying the DP-MLE method for preference learning in realistic language model training? Can you provide at least an outline of when/how this approach might be feasible in practice? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer vgrP for the time spent and the detailed comments. Due to the limited space, we only address some of the reviewer’s concerns. **Restrictive assumptions** While our research question is inspired by iterative human preference alignment (see the answer to Reviewer Gth4), we do not claim the direct applicability of DP MLE for realistic LLM training. When studying DP MLE only, we conjecture that the “global” assumptions 4.2 and 4.4 can be weakened to local versions. Using time-uniform concentration results, we can build a sequence of shrinking confidence regions $(R_n)_n$ around SO MLE that contains $\theta^\star$ for all time $n$ with high probability (whp). Then, we modify DP MLE to be constrained on $R_n \cap C_n$ that contains $\theta^\star$ whp. For $n$ large enough, $R_n \cap C_n$ will be included in a local neighborhood of $\theta^\star$ under which the “local” assumptions 4.2 and 4.4 are satisfied. Given that Assumption 4.4 is based on “ignoring” the reminder term in a first-order Taylor expansion, assuming a local version is a significantly weaker requirement. **1. Misspecification** There are two possible sources of misspecification not taken into account by our current analysis. - Misspecified observations. See answer to Reviewer yquN when $\theta^\star \notin \Theta$. When $p^\star \notin F$, $L_n(\theta)$ is a quasi-log-likelihood term, as $F$ doesn’t contain the true structure. Under sufficient regularity, SO quasi-MLE converges towards $\theta_{0} \in argmin_{\theta \in \Theta} KL(p^{\star}, p_{\theta})$ where $ p^\star \ne p_{\theta_0} \in F$. Without the separability from well-specified deterministic preference, we define DP quasi-MLE as in the answer to Reviewer yquN. Under sufficient regularity, this estimator converges towards the minimizer of a similar optimization problem based on the above KL and a misspecified equivalent of $m(\theta)$. - Misspecified preferences. The Bradley-Terry (BT) model that uses reward-based preferences has limited expressivity as it doesn’t allow for intransitive preferences. Even when individuals exhibit transitive preferences, their averaged preferences might be intransitive due to disagreements. See Munos et al. (2024, Nash Learning from Human Feedback) or Swamy et al. (2024, A Minimaximalist Approach to Reinforcement Learning from Human Feedback). **2. Verifying our assumptions** In all generality, it is challenging to give a general recipe to formally verify those assumptions. A formal verifier (Lean) or software (SageMath) might be useful given a closed-form definition. Numerically, those assumptions can be confirmed or rejected by sampling from $p_{\theta^\star}^{\otimes 2}$. Assumption 4.4 is rejected by exhibiting $(X_i,Y_i) \in \tilde D(\theta^\star,\theta) \setminus D(\theta^\star,\theta)$. Assumptions 4.2 and 4.5 are confirmed by finding $(X_i,Y_i) \in D(\theta^\star,\theta)$ and $(X_i,Y_i) \in G_{1}(\theta^\star,u)$. The sampling complexity of such tests scales as the inverse event’s probability. Using Dvoretzky–Kiefer–Wolfowitz inequality, $F_{\theta^\star,u}$ can be estimated to verify Assumption 4.7 hold. Our additional experiments with accelerated rate include Laplace and Rayleigh distributions, see the anonymous plots at https://anonymous.4open.science/r/ICML25SuppExp . **RU versus DP** See answer to Reviewer Gth4. **Covariance Gap** Our simulations suggest that the asymptotic gaps between SO and SP are mild. The empirical gap is also small for moderate $n$. **0-1 loss** See answer to Reviewer yquN. For non-separable data, the minimization of the 0-1 classification loss can be NP-hard even for the simple class of linear classifiers, e.g., Feldman et al. (2018, Agnostic Learning of Monomials by Halfspaces is Hard). Inspired by (Tang et al., 2024, Generalized Preference Optimization: A Unified Approach to Offline Alignment), we implement estimators based on other convex surrogates: Hinge, Square, Truncated square, Savage, and Exponential. All estimators perform on par with the logistic loss, see the plot at https://anonymous.4open.science/r/ICML25SuppExp . **Intuition on $V_{\theta^\star,u}$** It quantifies the amount of information in $(X_i,Y_i)$ to discriminate $\theta^\star$ from other parameters on the half-line directed by $u$. The lower $V_{\theta^\star,u}(X_i,Y_i)$ is, the more discriminative $(X_i,Y_i)$ is. **Informative preferences** For observations with null preference gradient, parameters close to $\theta^\star$ could have similar preferences. Therefore, those samples are not sufficient to discriminate between them. **Negative examples** Those claims are a direct consequence of the definitions and will be proved in Appendix for completeness. **Typo** It will be fixed. --- Rebuttal Comment 1.1: Comment: Thanks for clarifying the main concerns. I’ve raised my score based on the additional context you provided. However, I would still like this discussion—at least in part—to be included in the main paper, especially the sections on restrictive assumptions and misspecification. For this reason, I vote for acceptance, conditional on updating the camera-ready paper based on this discussion. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s support for the acceptance of our work based on the additional context. We will use the extra page in the main paper to include these interesting discussions, such as misspecification, verification and relaxation of our assumptions, alternative reward models, and more detailed intuitions. Additionally, we will expand the Appendices with those supplementary experiments and provide detailed proofs to support our added comments.
null
null
null
null
null
null
null
null
Which Agent Causes Task Failures and When? On Automated Failure Attribution of LLM Multi-Agent Systems
Accept (spotlight poster)
Summary: The paper explores automated failure attribution in LLM multi-agent systems. It introduces and formulates a new research of identifying the agent and specific step responsible for task failures within agentic systems. The research introduces the Who&When dataset, which contains failure logs from 127 LLM multi-agent systems, annotated to link failures to particular agents and error steps. The paper evaluates three automated failure attribution methods, demonstrating their strengths and limitations. The best method achieved 53.5% accuracy in identifying failure-responsible agents but only 14.2% in pinpointing failure steps, underscoring the complexity of the task. The authors argue that evaluation and failure attribution should be integrated and that more effort is needed to bridge the gap between evaluation results and failure attribution. They propose leveraging LLMs for automated failure attribution to reduce the need for manual analysis and enable human resources to focus on improving system functionality. Overall, the findings highlight the challenges of using LLMs for failure analysis in multi-agent systems and the need for further research in this area. # update after rebuttal I maintain my score of Accept and think authors discussed an important and interesting topic with proper experimental approach. Claims And Evidence: In general, the paper is well structured and written, and most of the claims are logical and sufficiently substantiated. Main claims are: 1) Automated failure attribution for multi-agent LLM systems is underexplored and novel, but it is important for debugging purposes. Especially with increasing complexity of these systems. Their claim is supported by literature overview demonstrating reliance on human labor that is resource-intensive. The authors further support it with the empirical results of human hours involved in data labelling. 2) A new dataset is constructed and annotated containing logs from 127 multi-agent systems with 184 failure annotation tasks claiming to advance the research in this area. The dataset indeed is thoroughly analysed and used to provide three methodologies applied to it. 3) Current three automated methods achieve modest accuracy (53.5% accuracy at identifying agents responsible for failure and 14.2% at pinpointing exact step), highlighting significant complexity in automation of this task. The experimental evidence is clear, it thoroughly outlines findings that can be logically followed. Somehow problematic claims: The paper explores a relatively new research area, supported by a newly introduced dataset, which contains annotations from 3 annotators and from a single hand-crafted agent system and multiple automatically generated agentic systems. Several findings exclude logs from automated ones (with well justifiable notes but still). This limits the diversity and generalisation raising questions if these results will hold for a broader variety of agentic systems. Saying that, I still think it is important to raise issues and problems but expressing those limitations in the paper will help to caveat it for the audience. The second issue worth raising is the obvious subjectivity in annotation. While authors raise that the consensus was built after, it highlights that the 'ground truth' might be noisy and further highlights that the accuracy results can be unreliable. Methods And Evaluation Criteria: Overall, the approaches and metrics used in the study are appropriate for addressing the complexities and challenges of failure attribution in LLM multi-agent systems. While the results are quite modest, I appreciate the complexity and documentation of those weak results. One issue in the appendix discusses computational costs for various failure attribution methods as mathematical function of the size of the failure log, the token count and error step. However, providing empirical results that demonstrate these costs in practice would substantiate these theoretical calculations. Just as the hybrid method's token expense is discussed in a practical context in the paper, showing how these costs manifest in experiments would be helpful. Theoretical Claims: The paper primarily focuses on empirical evaluations and practical methodologies rather than providing detailed theoretical proofs. Its main contributions involve introducing the Who&When dataset and evaluating three methods for automated failure attribution, highlighting their strengths and weaknesses through empirical results. Given this focus, the paper does not present theoretical proofs that require verification for correctness. Instead, it relies on experimental results to support its claims. The evaluation metrics and empirical analyses serve as the basis for the paper's conclusions. Therefore, there are no theoretical proofs within the paper that need to be checked for correctness. Saying that, there is a problem foundation that introduces mathematical notations for automated failure attribution that I thoroughly checked. One comment: the paper indeed introduces mathematical notations, however, it's practical utility in the paper is unclear, given that the authors almost do not utilise it for any theoretical derivations. The notation appears primarily illustrative rather than operational. Experimental Designs Or Analyses: The experimental designs and analyses are mostly sound and valid for the problem at hand. There are concerns for representativeness of accuracy due to limited data and annotators disagreement (only three people). A few suggestions and comments: 1) The paper does not clearly explain how the random baseline was constructed. Detailed information on this would enhance understanding and evaluation. 2) Figure 8 appears to have non-integer values for the number of agents ( we can't expect to have 1.5 or 4.5 agents). Adjusting the x-axis to display only integer values would be more appropriate. 3) I was puzzled by Annotation guideline (Figure 10) point c). Does it force to choose agent even when none can be found by an annotator? This can potentially lead to higher levels of uncertainty and some annotators in the debate biasing the rest. 4) In Section 2 the formulation of the problem defines a turn-based protocol. While it can be seen as a problem scope for this paper, an introduction in mathematical formulation sounds like this is the only way LLM agents can act, which is not true. Clearly specifying that this is the scope for this paper will make it more clear. Supplementary Material: I read through all of the appendix. And selectively reviewed the downloaded zip folder. Relation To Broader Scientific Literature: The paper contributes to the broader scientific literature by addressing the challenge of automated failure attribution in large language model (LLM) multi-agent systems. Here are the key contributions and their relation to prior findings and ideas: 1. Automated Failure Attribution aspect: Prior research has largely focused on using LLMs for various evaluation tasks, leveraging LLMs to reduce human labor. The paper introduces the concept of automated failure attribution within LLM multi-agent systems, proposing methods to identify the agent and steps responsible for failures. This extends the application of LLMs from evaluation to a more diagnostic role. 2. Who&When Dataset construction: Existing datasets often focus on evaluating model outputs in isolation without detailed annotations linking failures to specific agents and steps (like DevAI and SWE-Bench). The Who&When dataset fills this gap by providing failure logs from 127 multi-agent systems, with annotations that link failures to specific agents and errors. While this dataset supports more detailed and systematic failure analysis than previously possible, it still lacks of comprehensiveness. It can be better positioned as a first step towards automated failure attribution rather than finalised dataset that can be used to solve the problem. 3. Evaluation of Failure Attribution Methods: Manual failure attribution has been the norm (Gu et al., 2024; Tan et al., 2024; Zheng et al., 2023), with a significant labor cost and potential for human error. This paper evaluates three automated methods—All-at-Once, Step-by-Step, and Binary Search—demonstrating their respective advantages and limitations in identifying failure-responsible agents and steps. This empirical evaluation provides a first step towards understanding of the effectiveness of these methods, offering a benchmark for future improvements. Essential References Not Discussed: In my opinion, all the required references are there contributing to a structured and logical flow. Other Strengths And Weaknesses: The paper presents notable strengths in focusing on a new and increasingly critical area of research—automated failure attribution in LLM multi-agent systems. The growing complexity of agentic workflows means that effective debugging is essential. The authors have taken a significant first step by identifying this problem and introducing a new dataset and a few methods designed to address failure attribution. The currently weak performances of the proposed methods only further highlight the complexity of the task. The empirical evaluations of the All-at-Once, Step-by-Step, and Binary Search methods pave the way for future advancements and provide a crucial foundation for ongoing research. The paper also has its weaknesses. The constructed dataset is somewhat limited, which may restrict the generalizability of the results. The accuracy of the proposed methods is primarily based on this limited data and involves only a few annotators, leading to potential issues with the reliability and validity of the ground truth. This limited scope could result in higher levels of uncertainty and potential biases in the findings. To ensure more robust and widely applicable results, future work should consider expanding the dataset and involving a more extensive and diverse pool of annotators, agentic designs and logs to establish clearer and more certain ground truth data and robust findings. Other Comments Or Suggestions: Here is a list of various typos and unclear sentences: 1) Page 1 line 039, second column: manual efforts involve(s) 2) Couldn't find a reference to Figure 1 in the text 3) Page 4 lines 180-181: "We exclude the entire GAIA dataset" - but prior to that you say that you use randomly sampled instances of GAIA. It is confusing. Do you mean you exclude the rest of GAIA dataset? 4) Page 4, lines183-184, second column: "for both normal people and domain experts" :) It assumes that domain experts are abnormal. Suggest to change it to non-experts. 5) Page 5, line 221, first column: "we are thinking of performing" - assuming the rest is in past tense, it would be better to keep it as past tense. After all, you have already reported it. 6) Page 12 Figure 7 description "the explicit specify reasoning" 7) Page 12 line 630, "We only want(s)" - same in line 653 8) Page 12 line 634 "attribution in both ~two~ metrics" 9) Page 12 lines 655 "Although may provide some improvement<…> - reads like a comment, needs to be a full sentence. 10) Page 13 line 665 'specify" -> specifies 11) Figure 8 x-axis does not make sense with non-integer values Figure 10 Others b) needs rephrasing and checking. Questions For Authors: No other questions Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable time and insightful feedback! We address each of your questions in our responses below. **[Re Comment 1 on Claims & Comment 4 on Experiments: Need to specify the scope. ]** We will incorporate all your suggestions and make the following clarifications and revisions in future versions, Specifically: We will explicitly state at the beginning of Section 2 that the initial failure attribution task targets turn-based LLM multi-agent systems, aligning with prior work and widely adopted agentic frameworks [1][2]. We will also emphasize that the protocol is rigorously defined, as detailed in the Background of Section 2. To better inform readers about the scope and diversity of Who&When, we will expand the statistical overview in Section 3. This will include the total number of agents, a summarization of entire action steps, and detailed tools information—facilitating better adaptation to specific use cases. [1] Wu, Qingyun, et al. "Autogen: Enabling next-gen llm applications via multi-agent conversation." [2] Li, Guohao, et al. "Camel: Communicative agents for" mind" exploration of large language model society." **[Re Comment 2 on Claims: The subjectivity in annotation.]** The disagreement number reported in Figure 2 reflects only the initial stage of annotation. Annotators are required to reach a consensus by strictly adhering to the problem formulation in Section 2 rather than relying on subjective judgment. During debating, expert annotators must justify their positions based solely on this formal definition, ensuring precision and unambiguousness. We will clarify this in the revised Section 3.2. **[Re Comment 1 on Methods: Need to provide empirical results that demonstrate the cost calculations.]** Thank you for highlighting this detail! We have provided empirical cost calculations for all methods in Table 3. The results indicate that the Hybrid Method incurs the highest cost, followed by Step-by-Step, Binary Search, and All-at-Once. We would be happy to discuss them if you have further questions! **[Re Comment 1 on Theory: The practical utility of mathematical notations.]** The mathematical notation introduced in Section 2 aims to rigorously define failure-responsible agents and decisive error steps. These definitions (1) enable readers to clearly understand the research problem without ambiguity, and (2) establish strict guidelines for subsequent annotations in constructing the Who&When dataset. **[Re Comment 1 on Experiments: Details of Random search.]** The random baseline is constructed using a uniform selection strategy over all available options. Step-level accuracy is defined as the inverse of the average number of steps per problem, and agent-level accuracy as the inverse of the number of agents per problem. Final baseline performance is obtained by averaging these values across all data. We will include this explanation in the revised manuscript. **[Re Comment 2 on Experiments: Adjusting the x-axis in Fig. 8.]** Thank you for the suggestion. We will adjust the x-axis in the revised manuscript by removing non-integer values for clarity. **[Re Comment 3 on Experiments: Does it force annotators to choose an agent even when none can be found?]** Yes. We force annotators to choose one agent with their best rather than passively accepting others’ opinions; we also explicitly require annotators to highlight any uncertain annotations as shown in guideline (b). This helps maintain accuracy and eliminate biases. We will clarify this intention further in our revised manuscript. **[Re Weakness 1 & Suggestion: The constructed dataset is somewhat limited & future work should consider expanding the dataset.]** Thank you for the constructive suggestions! A key direction for future work is expanding the dataset to further benefit the research community. We also kindly emphasize that Who&When already covers a substantial scope: it includes 127 LLM-based multi-agent systems, 201 agents using 48 tools, and 4,092 action steps. The annotation process required 84.3 hours of expert effort across three annotators. **[Re Other Comment:]** We sincerely appreciate your detailed feedback and will incorporate all your suggestions. Specifically: **(1), (5), (7), (8):** We acknowledge the typographical errors and will correct verb tenses, third-person singular usage, and the redundancy in "both two." **(2):** We will add explicit references to Figure 1 at lines 48 and 70 when introducing "manual failure attributions" and "automated failure attribution." **(3), (4), (6)** We will revise all these unclear phrasings in accordance with your recommendations! **(9):** We will rephrase for clarity: "Although strong reasoning models yield improvement on the Who&When dataset, their performance remains insufficient for practical use in failure attribution." **(10):** We will remove non-integer x-axis values and revise the instruction to: "Record all uncertain annotations." --- Rebuttal Comment 1.1: Comment: Thanks for the response. I really like this paper and maintain the current rating as it is at 'Accept' level already but I believe all of the improvements will make this paper more solid. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable time and many constructive comments. Thank you for highlighting the strengths of our work! We will incorporate all your suggestions into our next version accordingly!
Summary: This paper introduces automated failure attribution for LLM-powered multi-agent systems, addressing the problem of identifying which agent causes a task failure and at which step the decisive error occurs. The authors formally define this research area, propose Who&When, a dataset with annotated failure logs from 127 LLM-powered multi-agent systems, and evaluate three automated attribution methods. Their experiments show significant performance variability related to context length and model choice, highlighting that current LLM-based methods are not yet practically usable and underscoring the need for further research. Claims And Evidence: (1) Problem Establishment: The authors rigorously formalize and motivate the automated failure attribution problem in LLM-powered multi-agent systems (Section 2). (2) Key Findings: The extensive experiments in Section 4 yield seven important insights, such as the relationship between context length and failure attribution performance. To me, these insights meaningfully guide future advancements in the field. Methods And Evaluation Criteria: The authors mainly investigate three automated failure attribution methods they proposed in this work on the Who\&When benchmark. Additionally, they also proposed two main metrics they established in this area, i.e, agent-level accuracy and step-level accuracy. The benchmark, the evaluation metrics and the methods they proposed makes senses to me and well-motivated. Theoretical Claims: The authors’ theoretical analysis are mostly around problem formulation and the cost analysis in Appendix. The problem formulation makes senses and rigorous. I also checked the Appendix and the conclusion are also correct. Experimental Designs Or Analyses: I have discussed the experimental designs in the previous parts. I like the finds listed in experimental section like the correlation between context length and failure attribution performance. Supplementary Material: I checked all content. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths** (1) The paper is overall well-structured and well written. The explanations are easy to follow and the logic flows are reasonable. The quality of the paper is very good. (2) The problem itself is very interesting and important. The problem formulation is mostly clear (see comments before) and the authors provide intuition about how it this problem is important. I am convinced by these arguments. (3) The authors propose the first benchmark named Who&When for failure attributions with established metrics. These metrics are well-motivated and makes senses to me. I have also checked the anonymous repository associated with the paper. The dataset quality is great and the annotations are also in a high-standard. (4) The authors perform extensive experiments on the proposed benchmarks and the conclusions are meaningful and could potentially guide the research in this new area. **Weakness** (1) Why not making a comparison/ do some analysis with Agent-as-Judge in your experimental section? If I understand correctly, the agent-as-judge could also be applied to this area with some mirror modifications. (2) Surprisingly, I see DeepSeek R1 performs worse than GPT-4o in step-level accuracy, while OpenAI O1 model is worse than GPT-4o in Agent-Level accuracy. Does the authors have some explanations for that? Some discussion needed to be included in the paper. If these conclusions hold, do you have some plan to incorporate model selection in the failure attribution procedures? (3) Different methods seem to have different advantages. For example, all-at-once is good at picking up mistake agent and step-by-step is good at picking up mistake step. Why not combining them? Other Comments Or Suggestions: 1. Experiments in Section B.2 should be incorporated in main experiments but not Appendix. Reasoning models are also LLMs and it should be compared with other models in Figure 3. 2. The experiments in Section B.2 are conducted on subset of Who&When or the entire dataset? I am not able to find any information about this on the paper. Questions For Authors: (1) Why the reasoning models perform worse than GPT-4o? Does the author have some explanation for this? (2) The experiments in Section B.2 are conducted on subset of the Who&When datset or the entire dataset? I could not find any information about that on the paper. (3) Why not performing experiments on the entire Who&When on all models in the experimental section? The experiments setting should be consistent. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the insightful comments! Please find our response to your comments below. **[Re Weakness 1: Why not make a comparison/do some analysis with Agent-as-Judge in your experimental section?]** We appreciate this suggestion; however, the research objective of our failure attribution task significantly differs from that of Agent-as-Judge, making a direct comparison unsuitable. Specifically: - **Distinct Research Objectives:** Agent-as-Judge employs LLM agents primarily to evaluate the completeness and quality of coding outputs generated by LLMs. Its main focus is on assessing delivered coding projects rather than identifying errors within agentic systems themselves. - **Different Target Objects:** In Agent-as-Judge, the evaluation targets the final outputs (coding projects) produced by LLMs. Conversely, our failure attribution research specifically examines agentic systems, explicitly aiming to pinpoint the responsible agents leading to failures and identify the exact decisive-error step. Given these substantial differences, we have not incorporated Agent-as-Judge into our current experimental setup. We will elaborate further on these differences in the related work section. **[Re Weakness 2 & Question 1: Need more explanations for Table 4 results.]** Thank you for the suggestion. We would like to clarify that the most strong reasoning model (OpenAI O1) still achieves the best performance on three out of four metrics, as highlighted in bold in the original table. However, we do observe instances where the reasoning model performs worse than GPT-4o. This could be attributed to several factors: reasoning models excel at tasks requiring complex logical inference, such as challenging mathematical problems, but they are not necessarily optimized for parsing intricate logs or identifying errors in complex agent-based systems, as these tasks fall outside their training objectives. Developing effective failure attribution methods is an important avenue that merits further research. **[Re Weakness 3: Why not combine different failure attribution methods? ]** We would also like to clarify that we did conduct experiments combining different failure attribution methods; these results are presented in Table 3. Our findings indicate that the Hybrid Method indeed achieves the best performance across both metrics.Thanks for the feedback! **[Re Weakness 4: Experiments in Section B.2 should be incorporated in main experiments but not Appendix.]** Thanks for the suggestions. We will consider moving it to the main paper after all additional experiments are done for these models. **[Re Suggestion 2 & Question 2: The experiment's setup.]** We conducted these experiments using the same experimental setup as described in the ablation studies in Section 4.4. Due to cost considerations, we did not use the complete dataset for these experiments. Thank you for your suggestion; we will clarify this point further in future revisions. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I like this work for the research area it explores. I will maintain my original rating. --- Reply to Comment 1.1.1: Comment: Thank you very much for your acknowledgment and valuable suggestions. We will incorporate these revisions into our next manuscript accordingly!
Summary: The paper introduces a new research problem of automated fault attribution in multi-agent systems. The task includes identifying both the agent and the corresponding step that lead to task failure. To study this task, a new benchmark dataset called Who&When is created by manually labeling 127 failure logs. Three prompting-based approaches are proposed and evaluated on the benchmark. Experimental results indicate that even SOTA LLMs struggle on this task. Claims And Evidence: The main claim made in the paper is that a new task for automated failure attribution is proposed, which is important and challenging. Their experimental results with SOTA LLMs highlight the challenging nature of this task. However, there are some issues with the task. There is no discussion of existing work on verifiers, particularly process reward models (PRM)[1, 2], that also aim to identify errors at the step-level. How is the proposed task different from this line of work? I understand that the proposed task is to identify the root-cause step that caused failure, rather than “all” erroneous steps, as done in existing work. But, can existing methods and datasets [3] least be leveraged for this task? For example, a simple baseline could be to choose the earliest erroneous step identified by PRM. The other sub-task is to identify the failure-responsible agent. However, once the decisive error step is identified, identifying the corresponding agent becomes trivial. b. The current task formulation seems highly subjective as shown in Figure 2, with up to 50% disagreement between human annotators. The authors mention that consensus was reached via discussions. But, is the task well-defined or does is a different task design required, such as choosing a set of problematic steps instead of a single one? Without this understanding, the practical utility of the current task and benchmark data remains questionable. [1] Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., ... & Cobbe, K. (2023, May). Let's verify step by step. In The Twelfth International Conference on Learning Representations. [2] Wang, P., Li, L., Shao, Z., Xu, R. X., Dai, D., Li, Y., ... & Sui, Z. (2023). Math-shepherd: Verify and reinforce llms step-by-step without human annotations. arXiv preprint arXiv:2312.08935. [3] Zheng, C., Zhang, Z., Zhang, B., Lin, R., Lu, K., Yu, B., ... & Lin, J. (2024). Processbench: Identifying process errors in mathematical reasoning. arXiv preprint arXiv:2412.06559. Methods And Evaluation Criteria: Given the high disagreement between annotators, some more discussion around reliability of the ground truth data would be useful. Why exactly is the task challenging? Are the disagreements between annotators usually within a range of steps? The dataset size is quite small. Proposed “step-by-step” method seems to assume that the agent cannot correct its actions. The high performance achieved by this method could suggest that the annotated logs do not contain self-correction steps, even though it is a popular strategy [4]. A more diverse benchmark dataset containing such LLM strategies would help draw more general conclusions. [4] Pan, L., Saxon, M., Xu, W., Nathani, D., Wang, X., & Wang, W. Y. (2023). Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies. arXiv preprint arXiv:2308.03188. Theoretical Claims: Cost analysis in Appendix E is useful and correct. Experimental Designs Or Analyses: Figure 5 error bars are too large to draw any meaningful conclusion. Were the results averaged over multiple LLM runs? What is the variance across runs? Supplementary Material: yes Relation To Broader Scientific Literature: The paper builds upon existing work on identifying step-level errors in LLM execution logs, and extends it to pinpoint the most severe error that led to failure. Proposed LLM-as-judge approaches, such as binary search, could potentially be useful for other long-text evaluation tasks. Essential References Not Discussed: Existing work on identifying step-level errors is missing, even though main task proposed in the paper is to identify the decisive failure step. [1] Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., ... & Cobbe, K. (2023, May). Let's verify step by step. In The Twelfth International Conference on Learning Representations. [2] Wang, P., Li, L., Shao, Z., Xu, R. X., Dai, D., Li, Y., ... & Sui, Z. (2023). Math-shepherd: Verify and reinforce llms step-by-step without human annotations. arXiv preprint arXiv:2312.08935. [3] Zheng, C., Zhang, Z., Zhang, B., Lin, R., Lu, K., Yu, B., ... & Lin, J. (2024). Processbench: Identifying process errors in mathematical reasoning. arXiv preprint arXiv:2412.06559. Other Strengths And Weaknesses: Strengths The problem is well motivated Extensive analyses were conducted. Analysis and findings on consistency of the performance of proposed methods across LLMs is particularly useful. For weaknesses, please refer to all issues above mainly around task formulation, dataset reliability and generality. Other Comments Or Suggestions: Lines 036-040 and 047-059 are weirdly phrased Questions For Authors: How often is the decisive error step same as the earliest error made by any agent? What length values do the 5 levels in Figure 4 correspond to? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable time and insightful feedback! Due to text limit, additional experiments are shown in anonymous link: **https://shorturl.at/JSeJd**. **[1. Discussion needed for verifiers & Can [3] be leveraged?]** Thank you for the suggestions! We acknowledge the necessity of discussing verifiers. We will add a dedicated subsection in related work, including **all the references you mentioned**. We also kindly emphasize that failure attribution is a fundamentally different research problem, and verifier, like PRMs are not directly applicable: PRMs are designed to reward well-structured reasoning chains in tasks like Math. In contrast, our failure attribution targets multi-agent systems, where turn sequences do not necessarily form a coherent reasoning chain. Single agent step may involve multiple reasoning steps—or none—while different agents may address distinct subtasks, with turn-taking reflecting task decomposition rather than unified reasoning. So, ProcessBench does not apply to our task. In response to your suggestions, we evaluated these PRMs in [3] for failure attribution under the same setting as Tab. 1 and shows the results in **https://shorturl.at/JSeJd**. It shows that no PRM outperforms All-at-Once and Step-by-Step in two metrics. This performance gap further underscores the fundamental difference between the objectives of PRMs and the proposed failure attribution task. **[2. Task formulation is subjective.]** We kindly clarify that the final annotation is not subjective. The disagreement number in Fig. 2 reflects only the **initial stage** of annotation, where approximately 20% cases were flagged as ambiguous. In subsequent stages, annotators rigorously followed the deterministic formulation in Sec. 2, engaging in discussion and cross-validation to ensure the final annotations were accurate. **[3. Why is the task challenging?]** It is challenging because: Agentic logs are very difficult to interpret. On average, each failure case includes 30 steps and 4,695 words and links to a total of 48 tools. Annotators must manually reconstruct agent actions, which requires understanding the complex action logic and manual run tool to verify behavior. Annotators may need to manually solve the task—either to obtain a "golden trajectory" for reference or to resume from intermediate states to assess whether a step constitutes a decisive error (as defined in Sec. 2), which demands replicating the agent’s conditions over multiple steps and is often hard. **[4. The results suggest the data do not contain self-correction. & How often is the decisive error step the same as the earliest error?]** Self-corrections are common in Who&When, which is reflected in the low average step-level accuracy of 14.2% in Tab. 1. The decisive-error step might occur after the prediction step of Step-by-Step. In light of your suggestions, we conducted additional round of annotations—using the same procedure—solely on the hand-crafted systems (considering limited time) to identify the first-error step. We then compared it to the decisive error step and reported the overlap percentage in **https://shorturl.at/JSeJd**, which shows that in nearly half of the cases, they do not overlap. **[5. Dataset is small.]** We plan to expand the dataset in extended work. However, we respectfully argue that the current dataset is non-trivial and sufficient for drawing meaningful conclusions. Our main findings—such as the relative ranking of three methods across metrics—remain robust regardless of dataset scale. In Tab. 1, these rankings hold across both algorithm-generated and hand-crafted systems. The dataset includes 201 distinct agents, 4,092 steps, an average of 4,695 words per trajectory, and 48 external tools. Annotation required 84.3 hours by three experts. which offers a rich testbed for evaluating failure attribution methods. **[6. Were the results averaged over multiple runs?]** We did not initially run multiple LLM trials because: (1) the randomness has minimal impact given the task’s difficulty, (2) small variations don’t affect the paper’s core conclusions—such as the relative ranking of methods- and (3) extreme cost. In response to your suggestion, we conducted five more runs under the same setting as Tab. 4 and report the results in **https://shorturl.at/JSeJd**. We found the findings remain consistent. **[7. Are the disagreements within a range of steps?]** No, the ambiguous decisive error between annotators does not exhibit a correlation with their positional distance. **[8. Regarding Fig. 4 and 5]** We will replace Fig. 5 with a table to improve clarity. Additionally, we will clarify the step ranges in Figure 4: Level 1 spans 5–17 steps, Level 2 covers 19–29, Level 3 includes 31–49, Level 4 ranges from 51–91, and Level 5 spans 93–130 steps. **Thank you for your time and consideration! We sincerely hope that you find our responses convincing and would consider increasing the rating.**
null
null
null
null
null
null
null
null
Guarantees of a Preconditioned Subgradient Algorithm for Overparameterized Asymmetric Low-rank Matrix Recovery
Accept (poster)
Summary: This paper establishes theoretical guarantees for the preconditioned subgradient algorithm in the context of overparameterized low-rank matrix recovery (LRMR), with a particular focus on the non-smooth case. It is rigorously proven that the proposed preconditioned subgradient method achieves linear convergence to the true solution. The paper is well-written, with clear and coherent exposition, and presents compelling results that contribute meaningfully to the field. Claims And Evidence: The authors assert that the convergence rate of the proposed preconditioned subgradient method is independent of the condition number of the target matrix, as stated in Theorem 5.4. However, the experimental results presented in Figure 4 are insufficient to fully validate this claim. Specifically, the relative error in Figure 4 reaches magnitudes as small as 1e-6, which is still too large to conclusively demonstrate the entire convergence process. To strengthen their argument, the authors are encouraged to provide additional experimental evidence showing the relative error at a much finer precision, such as $10^{-14}$, to ensure the robustness of their convergence analysis. Furthermore, the results in Figure 4 indicate differences in convergence rates across varying condition numbers $\kappa$, where $\kappa$ is set to relatively small values (20, 40, 60, 80, 100). To more rigorously test the independence of the convergence rate from the condition number, the authors should consider conducting experiments with significantly larger values of $\kappa$, such as $10^3$, $10^4$, and beyond. This would provide a more comprehensive evaluation of the method's performance under a wider range of conditions and strengthen the empirical support for their theoretical claims. Methods And Evaluation Criteria: Yes. Theoretical Claims: The authors claim that the convergence rate of OPSA is independent of the condition number of the target matrix $X_*$. However, in Theorem 5.4, they assume that $\sigma_r(X_*)=1$, which implies that ${\kappa} (X_*)={||X||}$ (the operator norm of matrix $X_*$). This assumption suggests that the convergence results are inherently tied to $||X_*||$, creating an apparent contradiction with the claim of condition number independence. Specifically, if the convergence rate depends on $||X_*||$, it indirectly depends on the condition number $\kappa(X_*) $, given the assumption $\sigma_r(X_*)=1$. This discrepancy warrants further clarification from the authors to reconcile the theoretical claims with the assumptions made in the analysis. Experimental Designs Or Analyses: Regarding the experimental results in Figure 5, similar to Figure 4, the relative error is only shown up to a precision of 1e-6. To strengthen their argument, the authors are encouraged to provide additional experimental evidence demonstrating the relative error at a much finer precision, such as $10^{-14}$. This would offer more robust support for the claimed convergence properties of the algorithm. Additionally, the authors state in line 435 that with a large value of $\lambda$, OPSA may become trapped in a local minimum. However, this conclusion lacks theoretical solidity. Specifically, for this non-convex optimization problem, if local minima exist, it remains unclear how a gradient/subgradient-based algorithm like OPSA can escape these local minima and converge to the global solution. As observed in Figure 4, the results for $\lambda=10$ show that OPSA has not yet converged, even after the presented number of iterations. To provide a more comprehensive understanding, the authors should extend the iteration count to at least $10^4$ and present the corresponding results. This would help clarify whether OPSA can eventually escape local minima and achieve global convergence under varying conditions. Supplementary Material: The proof of the main theorem. Relation To Broader Scientific Literature: The key contributions are closely related to the broder scientific literature. Essential References Not Discussed: Not available Other Strengths And Weaknesses: 1. The authors claim that the distance metric introduced in Eq. (22) is novel. However, to the best of my knowledge, this appears to be only an incremental improvement over the distance metric proposed by Tong et al., 2021a. While the modification may offer certain advantages, it does not constitute a fundamentally new contribution. 2. The setting of the parameter $\lambda$ is crucial to the effectiveness of the proposed method. In similar works, such as Zhang et al., 2023a and Xu et al., 2023, the authors provide thorough discussions on the selection and impact of $\lambda$. In contrast, this paper lacks sufficient theoretical or empirical discussion on the choice of $\lambda$, offering only marginal experimental insights. A more detailed analysis of how $\lambda$ influences the algorithm's performance, along with a broader range of experiments, would significantly strengthen the work. 3. In the References section, the entries for Zhang et al., 2023a and Zhang et al., 2023b appear to be identical, which seems to be an error. The authors should carefully review and correct this duplication to ensure the accuracy and integrity of the references. Other Comments Or Suggestions: Not available Questions For Authors: Please refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their effort to review our paper and for the constructive feedback! We address all reviewers' comments/concerns below. > Summary: The paper is well-written, with clear and coherent exposition, and presents compelling results that contribute meaningfully to the field. We would like to thank the reviewer for recognizing the significance of our contribution and the positive evaluation of our work! > Claims And Evidence >> The authors assert that .... Following the reviewers’ suggestion we have accordingly updated Figure 4. The updated Figures can be found [here](https://ibb.co/WpH1yZvq) and [here](https://ibb.co/YBcJSmd4), and show linear convergence to ground truth up to the order of $10^{-13}$ (not exactly zero relative error due to numerical limits and roundoff errors). >> Furthermore, the results in Figure 4 indicate .... We have carried out additional experiments considering higher values of condition numbers (i.e., $\kappa=1000, 10000$). The new empirical results can be found [here](https://ibb.co/rKJhrFn3) and [here](https://ibb.co/svZ9Qnv0), and show that our proposed algorithm still converges linearly even for significantly larger condition numbers and more challenging scenarios. >Theoretical Claims We believe there is a misunderstanding here. As noted in Remark 5.16, the initialization condition does depend on the condition number of $\kappa(X_\ast)= || X_\ast ||_{op}$. However, the **rate of convergence** provided in Theorem 5.4 is $1 - \frac{0.12}{\chi^2}$ and it is **independent of $\kappa(X_\ast)$**. We have further emphasized this distinction in the revised version of our paper. > Experimental Designs Or Analyses >> Regarding the experimental results in Figure 5... Please see our response above and the updated figures provided in the anonymous links therein. >> Additionally, the authors state ... Thank you for this comment! Indeed, the statement we made was incorrect and we have accordingly revised it. An extremely large $\lambda$ will significantly slow down convergence but not lead to local minima if the initialization condition given in Theorem 5.4 is satisfied. We verified this claim in the additional experiments we conducted and have included them in the revised paper. In the updated figures, which can be found [here](https://ibb.co/G32nRDnY) and [here](https://ibb.co/KJhrpYS), we observe that OPSA converges linearly for a higher value of $\lambda$ but now much slower, requiring $\approx 10000$ iterations to converge. > Other Strengths And Weaknesses >> The authors claim that the distance metric introduced in Eq. (22) is novel. We would like to point out that the distance metric itself is a key ingredient that led to the technical contributions needed for proving Theorem 5.4. Specifically, this new distance metric renders the previous proof techniques used in Tong et al., 2021a, not applicable in this overparameterized rank setting that we address in this paper. We would like to refer the reviewer to our detailed responses above on a similar concern raised by Reviewers Uzmr and oy8C. >> The setting of the parameter $\lambda$ ... As reported in our numerical results (see Figure 5), OPSA is emperically robust to a wide selection of $\lambda$, showcasing good performance and similar iteration complexity for a range of values from $10^{-4}$ to $2$. Theoretically speaking, from our existing theoretical results, we can easily derive a lower bound for $\lambda$, i.e., $\lambda \geq c$ with $c$ being some universal positive constant. On the other hand, an extremely large $\lambda$ will empirically and theoretically slow down the convergence, as we have discussed above, also see the updated figures ( [here](https://ibb.co/G32nRDnY) and [here](https://ibb.co/KJhrpYS)) and Theorem 5.4. By setting a desired convergence rate, an upper bound of $\lambda$ can also be computed, which resembles the bound in Xu et al, 2023. That said, to choose a mild $\lambda$ that works with OPSA is not difficult in many cases. We have added a short discussion on this in the revised version of the paper. >> In the References ... Thank you for pointing this out! We have fixed this issue in the revised version of our paper.
Summary: In "Guarantees of a Preconditioned Subgradient Algorithm for Overaparametrized Asymmetric Low-rank Matrix Recovery" the authors provide a novel method to solve robust asymmetric and overparametrized matrix sensing problems without the rate scaling with the condition number of the solution matrix. The authors provide numerical and theoretical evidence of the algorithm's performance. Claims And Evidence: Main claim: The proposed algorithm converges linearly when applied to robust and overparametrized matrix sensing problems. Further, the convergence rate is independent of the ground truth matrix condition number. The authors establish that the claim is true in theorem 5.4 and provide numerical evidence that even with overparametrization, the convergence rate does not degrade as abruptly as that of existing schemes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check the correctness of the proofs. Experimental Designs Or Analyses: Yes, I do not think the experimental design has issues. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The authors suitably position their work in the literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: 1) No bounds for RIP constants (and consequently restricted smoothness and sharpness) are provided for the Gaussian case. 2) Requires spectral initialization (but this is standard in the robust case) 3) Rate dependence on the restricted condition number of the loss, which may be dimension dependent. 4) sensitivity to hyperparameter lambda which requires a lot of knowledge to be tuned appropriately. Other Comments Or Suggestions: N/A Questions For Authors: Q1) The authors introduced the mixed norm RIP. Can the authors compare their notion of RIP to those existing in the literature? Other notions of RIP have been introduced in the robust setting. Q2) Is it possible to establish guarantees for the RIP for Gaussian matrices? As is, it is tested empirically. It would be good to know whether the proposed notion of RIP scales with increased d. The provided simulations only provide results for fixed rank 10, while the restricted smoothness and sharpness must hold for matrices of rank d. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for taking the time to review our paper and the valuable feedback! Next, we provide point-by-point responses to all reviewer’s comments and concerns. > Other Strengths and Weaknesses >> No bounds for RIP constants (and consequently restricted smoothness and sharpness) are provided for the Gaussian case. Assuming a measurement operator with Gaussian i.i.d. $\frac{1}{p}\mathcal{N}(0,1)$ for the matrix sensing problem, we can invoke the result of Tong et al 2021, which gives the following bounds for the RIP constants: - $\delta^{-}_{2d} \gtrsim 1$ - $\delta^{+}_{2d} \lesssim 1$ - $\delta_0 \gtrsim 1 - 2p_s$ as long as the sample complexity satisfies $p \gtrsim \frac{(m+n)d}{(1-2p_s)^2}\mathrm{log}( \frac{1}{1-2p_s})$, where $p_s$ is the outlier's ratio. A short discussion on this has been added to the revised paper. >> Requires spectral initialization (but this is standard in the robust case) Indeed! Our work, similarly to other SOTA robust matrix sensing works, e.g. Tong et al 2021, requires spectral initialization. For future work, random initialization is one of the tasks we will try to tackle for robust matrix sensing. >> Rate dependence on the restricted condition number of the loss, which may be dimension dependent. You are right. The rate of convergence depends on $\chi$ which is the condition number of the loss. Unfortunately, to the best of our knowledge, this dependence of $\chi$ is something that all current SOTA works in this area share. We will work on removing this dependence in future works. >> Sensitivity to hyperparameter lambda which requires a lot of knowledge to be tuned appropriately. As shown in the experimental section, the performance of our algorithms is quite robust to a wide selection of $\lambda$. For details, we refer the reviewer to Figure 5. Note, however, that a massive $\lambda$ value may slow down the convergence, as it matches our theory (Please see also our [response](https://openreview.net/forum?id=GaCo82yC7z&noteId=hMHh5h5Ohj) to Reviewer Uzmr on this). > Questions For Authors: >> Q1) The authors introduced the mixed norm RIP. Can the authors compare their notion of RIP to those existing in the literature? Other notions of RIP have been introduced in the robust setting. Thank you for this question. The mixed-RIP norm has been actually used in prior works e.g. Tong et al 2021. We refer the reviewer to our [response](https://openreview.net/forum?id=GaCo82yC7z&noteId=2UnKEoGVze) to Reviewer oy8C, and [Char21], which shows the analytical derivation of this mixed-RIP condition. >> Q2) Is it possible to establish guarantees for the RIP for Gaussian matrices? As is, it is tested empirically. It would be good to know whether the proposed notion of RIP scales with increased d. The provided simulations only provide results for fixed rank 10, while the restricted smoothness and sharpness must hold for matrices of rank d. We would like to refer the reviewer to our previous response above on this topic. Indeed, the sample complexity bound now scales linearly with $d$ instead of $r$. Please also note that in the synthetic experiments, the true rank $r$ was always overestimated with $d\geq r$. Our reported results show linear convergence in the overaparameterized rank regime, validating our theory.
Summary: The paper presents a preconditioned subgradient method for robust low-rank matrix sensing using a Burer-Monteiro factorization, focusing on the case where the rank $r$ of the ground truth signal is not known. The preconditioner used is a straightforward modification of the preconditioner proposed by [Xu et al.](https://arxiv.org/abs/2302.01186) tailored to the asymmetric case. The authors show that the resulting subgradient method, when initialized near the solution set, converges to a global minimizer at a geometric rate independent of the condition number of the unknown signal and present favorable numerical results on synthetic problems. Claims And Evidence: The main claim in the paper -- namely, geometric convergence independent of the condition number -- is supported both theoretically and experimentally. I have a few issues and questions related to other claims made in various places in the submission: - In Remark 5.6, the authors state that "the initialization condition is negatively affected as this condition number increases" (ostensibly referring to the condition number of $X_{\star}$). However the initialization condition only depends on the spectral norm of $X_{\star}$, which can be kept constant as the condition number increases. Is there a typo in the statement Theorem 5.4? Please clarify. - The sentence in Line 434 (left column) reads: "if the $\lambda$ parameter is set too large, OPSA may get stuck at some local minimum". I suspect this interpretation of the numerical evidence is wrong -- in particular, Theorem 5.4 imposes an initial condition of $\mathrm{dist}(F_0, F^{\star}) \leq \lambda \epsilon$, which is *milder* as $\lambda$ increases. My understanding, based on Theorem 5.4, is that larger $\lambda$ simply push the contraction factor $\rho(\chi, \delta, \epsilon)$ closer to 1 leading to very slow convergence rather than stagnation. Please clarify this point as well. Methods And Evaluation Criteria: The method is evaluated on synthetic problem instances, which is standard in the literature on matrix sensing. Experiments on non-synthetic benchmarks, especially regarding the tuning of the preconditioner's $\lambda$ parameter, could strengthen the paper but are not necessary. Theoretical Claims: I am familiar with the literature on (overparameterized) matrix sensing and the proof outline makes sense to me. However, I only checked the proofs at a high level. I believe the theoretical claims contain some typos, which I will list under the "Other comments or suggestions" portion of the review. Experimental Designs Or Analyses: No issues with experimental designs or analyses. Supplementary Material: I reviewed the supplementary material at a high level to understand the key steps of the proof of Theorem 5.4. Relation To Broader Scientific Literature: The authors have done an adequate job positioning the paper within the broader literature on matrix sensing. However: - The proposed algorithm is essentially the same as the one in [Cheng & Zhao](https://ieeexplore.ieee.org/document/10446187). In particular, both algorithms treat the loss function as the composition of a well-conditioned "outer" loss with a poorly-conditioned "inner mapping", and apply the exact same preconditioner to the subdifferential/gradient of the "outer" loss. This work is cited in the paper but omitted from Table 1, while it covers the asymmetric case with unknown rank and the resulting convergence rate is independent of $\kappa(X_{\star})$. - In Table 1, the ScaledGD($\lambda$) method appears misattributed to (Xiong et al. 2023), when (to my knowledge) it was proposed in the work of (Xu et al., 2023). Essential References Not Discussed: Most essential references are already discussed. The idea of RIP under "$\mathcal{I}$-outlier bounds", which appears without attribution, can be attributed to [this paper](https://link.springer.com/article/10.1007/s10208-020-09490-9) as well as [this earlier work](https://arxiv.org/abs/1705.02356). Other Strengths And Weaknesses: Strengths: - The problem and algorithm are clearly motivated. - The numerical experiments cover a variety of regimes (albeit all being synthetic). Weaknesses: - Given the discussion under "Relation To Broader Scientific Literature", the main contribution of this paper is to cover the $\ell_1$ loss with outliers given an asymmetric Burer-Monteiro factorization. With appropriate book-keeping (e.g.,, defining the correct distance measure -- see also my next comment below), the "restricted regularity" framework followed by the authors is well-tread by analyses (Tong et al., 2021). - I disagree with the claim (under "Technical Innovation") that the distance metric in (22) is novel. It is already known from the work of (Tong et al. 2021) and (Cheng & Zhao, 2024) that the distance must be measured in the norm induced by the preconditioner (and that, for that purpose, it suffices to use the norm induced by the preconditioner evaluated at the optimal solution - this leads precisely to the norm used in (22)). It is possible that I am missing something here, and I welcome any clarification. - (Relatively minor weakness): The paper appears to have been written (or revised) somewhat hastily. There are several typos, unused notation (e.g., the Kronecker product symbol) and passages that can be polished. For example, nothing about the "low-rank matrix estimation problem" defined in Eq. (5) suggests that the solution should be low-rank! Other Comments Or Suggestions: - Please take care to remove unused notation (e.g., $\otimes$ and $\mathrm{vec}$). - Please introduce necessary notation as needed. For example $\Delta_{L}$ and $\Delta_{R}$ in the appendix are used before they are defined. - Possible typos in (2) and (3): it should be $L_t^{\mathsf{T}} L_t$ instead of $L_t L_t^{\mathsf{T}}$. Similarly for $R_t$. Moreover, you should be taking the gradient of some loss function which is missing. - Possible typo in Corollary 5.10: should the iteration complexity depend on $\delta_{2d}^+$ rather than $\delta_{2r}^+$? - Possible typo in Proposition 5.12 / Corollary 5.13: should $L$ be $\delta_{2d}^+$ instead of $\delta_{2r}^+$? - A discussion of the sample complexity of your method, especially in the presence of outliers, would be useful. - More broadly: can the dependence be improved from $\delta_{2d}^+$ to $\delta_{d+r}^{+}$? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's time in thoroughly evaluating our paper and for providing insightful comments and feedback. Below, we provide point-by-point responses addressing each of the reviewer’s comments and concerns. > Claims And Evidence We are glad that the reviewer has found our main claim well-supported both theoretically and experimentally! >> In Remark 5.6, ... Thank you for this question. Please note that in the statement of Theorem 5.4 we say that $\sigma_r(X_*)=1$, hence the condition number $\kappa(X_\ast)$ is equal to the spectral norm of $X_\ast$ i.e., $\kappa(X_\ast) = \frac{\sigma_1(X_\ast)}{\sigma_r(X_\ast)} = \sigma_1(X_\ast) = ||X_\ast||_{op}$. >> The sentence in Line 434 ... Thank you for bringing up this point! Indeed, the wording in the paper “***OPSA may get stuck at some local minimum"*** is not accurate and we have revised it. You are right to say that a large $\lambda$ will relax the initialization condition, but slow down convergence. We verified this claim in the additional experiments we conducted, which are included in the revised paper. As observed in the updated figures, which can be found [here](https://ibb.co/G32nRDnY) and [here](https://ibb.co/KJhrpYS), OPSA still converges linearly for a higher value of $\lambda=10$ but now much slower than in the case of lower values of $\lambda$, requiring $\approx$ 10000 iterations to converge. >Methods And Evaluation Criteria To strengthen the experimental section, we provide additional experiments on robust matrix completion on real datasets, which can be found [here](https://ibb.co/5XVfRNQf) and [here](https://ibb.co/hJsTH8qm). The results showcase the merits of our approach in this real-world application and have been included in the revised version of our paper. Please see also our [responses](https://openreview.net/forum?id=GaCo82yC7z&noteId=2UnKEoGVze) to Reviewer oy8C. > Theoretical Claims Thank you for your time to review our proofs and for finding them reasonable. In the revised version of the paper, we have fixed the listed typos and some others! > Relation To Broader Scientific Literature Following your suggestion, we have added Cheng&Zhao’s [C&Z] to Table 1. Indeed C&Z's approach addresses the asymmetric and overparameterized setting, however, they focus on smooth losses, unlike our work which addresses non-smooth loss functions. We should also note that C&Z reports linear convergence in function values unlike our work, which relies on a distance metric based on updates of matrix factors. >> In Table 1, the ScaledGD() method appears misattributed ... We have fixed that in the revised manuscript. > Essential References Not Discussed That’s correct! We have added the references for these two papers to the revised manuscript. > Other Strengths And Weaknesses >>Strengths Thank you for your positive evaluation of our work! >> Weaknesses >> Given the discussion under ... Note that Tong et al. 2021 does not address the overparameterized rank setting. Our contribution generalizes their approach to include this setting, leading to non-trivial technical insights. Please refer to our [response](https://openreview.net/forum?id=GaCo82yC7z&noteId=2UnKEoGVze) to reviewer oy8C and the next comment for further details. >> I disagree with the claim ... Thank you for bringing up this point! It is true that ***the norm induced by the preconditioner evaluated at the optimal solution*** is exactly what we need to use as the distance metric. However, please note that Tong2021 uses non-regularized preconditioners, hence, our distance is a generalization of that one. Moreover, to the best of our knowledge, C&Z shows linear convergence in function values and it is not clear in the paper that the same distance metric as ours is being used at all. Note also that the distance is not the main contribution of the paper, but rather the key point for the technical innovation. In particular, demonstrating linear convergence and contraction for this distance metric is far from a trivial extension of prior work (e.g., Tong, 2021). The bounds and matrix inequalities established in earlier studies cannot be directly applied to this more general, perturbed version of the distance metric. >> (Relatively minor weakness) We have fixed all these issues in the revised manuscript! > Other Comments Or Suggestions >> Sample Complexity Discussion Assuming a measurement operator with Gaussian i.i.d. $\mathcal{N}(0,1)$ for the matrix sensing problem we can invoke the result of Tong et al 2021, which gives the following sample complexity bound for the mixed-RIP and $\mathcal{S}$-outlier bound conditions to hold w.h.p. $p \gtrsim \frac{(m+n)d}{(1-2p_s)^2}\mathrm{log}( \frac{1}{1-2p_s})$, where $p_s$ is the outlier's ratio. A short discussion on this has been added to the revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for your answers. I have revised my score upwards. > However, please note that Tong2021 uses non-regularized preconditioners, hence, our distance is a generalization of that one [...] I do not dispute that the technical points in your analysis might be nontrivial to carry out. The point that I was making is that the induced metric in which you measure convergence is "standard" once a preconditioner has been chosen, and regularized preconditioners are certainly not new in the overparameterized matrix sensing / factorization literature. > To strengthen the experimental section, we provide additional experiments on robust matrix completion on real datasets Thanks for adding these. Does your theory imply any guarantees for matrix completion? As far as I know, the operator $P_{\Omega}$ in matrix completion does not satisfy the restricted isometry property without additional assumptions. --- Reply to Comment 1.1.1: Comment: Thank you very much for raising the score, and we appreciate that you recognize our contributions. >Does your theory imply any guarantees for matrix completion? As far as I know, the operator in matrix completion does not satisfy the restricted isometry property without additional assumptions You are right that the sampling operator $P_{\Omega} $ doesn't hold RIP condition in the sense $||\frac{1}{p}P_\Omega-\mathbf{I}||$ is not uniformly bounded by a small constant (at least not with high probability) under the operator norm. Note that $||(\frac{1}{p}P_\Omega-\mathbf{I})X||\leq \epsilon$ holds for a fixed matrix $X$, but the independence requirement destroys the RIP and makes it almost useless for iterative convergence analysis. Thus, our theorem doesn't directly guarantee the convergence of matrix completion, just like most, if not all, matrix sensing convergence theorems. The RIP for matrix completion is usually presented in the form of $||P_T-\frac{1}{p}P_T P_\Omega P_T||\leq\epsilon$, where $P_T$ is a projection onto the tangent space $T$ of the low-rank manifold at the groundtruth. It holds with high probability if the groundtruth is $\mu$-incoherent, and of course, subject to the sampling pattern of $\Omega$. That said, our proofs in this paper can be used as a blueprint for the matrix completion case. We anticipate that the proof can be worked out by following our blueprint, but the technical details will be more involved as the RIP is presented in a different way. We hope this answers your question. We also encourage other reviewers to have follow-up discussions with us. If you are happy with our rebuttals/answers, please adjust your scores accordingly. Thank you very much.
Summary: This paper studies the problem of recovering a low-rank matrix from noisy linear measurements of the matrix (i.e. inner products with the vectorization of the matrix). This paper studies the problem at a particular level of generality: - We have adversarial noisy measurements of an ill-conditioned asymmetric matrix and want to recover it by solving a minimizing over all low-rank matrices with respect to a non-smooth loss function. If we remove any of these requirements (noisy, ill-conditioned, asymmetric, non-smooth loss), then this problem has been solved by prior work. They show that a generalization of prior algorithms succeed in recovering low-rank matrices in this setting with an algorithm that converges exponentially quickly (i.e. has an iteration complexity that's logarithmic in the desired precision). Paper is mostly theory with some lightweight experiments. Claims And Evidence: The core claims of the paper are convincing at a glance. I have no particular issues with the claims in the paper, but I also didn't invest the time to dig into their theorems. The paper seems to argue a simple generalization of prior techniques to get a slightly more general result that previously possible using a very believable algorithm that just sorta seems like the right generalization of prior works. The empirics are nice, but not sufficient to make me feel comfortable recommending this algorithm to practitioners. Paper is mostly theory, but that feels mostly solid. Some mild gaps in what they discuss, but nothing severe. In short, I'm comfy accepting the paper. Methods And Evaluation Criteria: The paper has an empiric section, but it's somewhat lacking. They propose this simple method for low-rank matrix recovery, but only apply it to the simplest toy case where our underlying ground truth matrix is the product of "two $n \times r$ random matrices" [Line 381, left], we make iid Gaussian measurements of our matrix, and the noise is randomly positioned with uniform at random (admittedly very large) values. This is a very nice starting point for analyzing an algorithm, the sorta sanity check you want that this method actually works as theorized, but doesn't show that this method works at all for anything more realistic. I would really have wanted to see a real-world matrix that's operated upon. Perhaps a matrix completion or a robust PCA application, as they point out as potential applications in the first paragraph of their introduction [Lines 42-48, left]? Further, the authors only use one algorithm as a benchmark, despite the fact that they cite many prior works as algorithms that try to solve low-rank matrix recover (see e.g. table 1 on page 3). The empirics are good enough to make me believe that the proposed algorithms has the potential to work, but is not enough to make me believe that the algorithm actually works well. I'll note that I'm by no means an expert in the space of low-rank matrix recovery, so I'm not sure what the best benchmarks are, but something involving real world matrices should be possible. **I would love to hear if other reviewers have other benchmarks in mind.** Plus, given that they already have some experiments running, conceptually shouldn't be so hard to get running on other real-world matrices. Theoretical Claims: I didn't check the details of the proofs at any stage. I just followed the vague intuitions throughout the paper. Nothing here seems like a really bold surprising claim. Their main theoretical result (theorem 5.4) is a pretty reasonable-looking claim, saying that if we start from a good initialization point then their iterative method converges to an extremely good solution. They also say, perhaps a bit too vaguely, that a prior work gives an easy method to produce a good-enough starting point. I have some minor peeves with the way the paper writes some parts of it's theory (such as sorta obfuscating the fact that theorem 5.4 has a mild dependence on conditioning, which is totally cool and fine, but obfuscated). I exhaust my little peeves in my list of typos and recommended edits. I don't really have concerns about the theoretical correctness broadly though. Experimental Designs Or Analyses: N/A (basically, this is the same as "Methods and Evaluation Criteria" imo) Supplementary Material: No. Relation To Broader Scientific Literature: This is maybe my sticking point for this paper. The results seem essentially correct. The experiments seem fine. The improvement over the prior work is that this paper proposes a way to satisfy 4 directions of generality as the same time in low-rank matrix recovery: 1. (Bounded) Adversarial noise 2. Asymmetric matrix 3. Ill-conditioned matrix 4. Non-smooth loss function The proposed algorithm is intuitively very natural following from how the authors describe the prior work. I don't want to undersell the fact that it does require novelty, but isn't like a wild idea or anything either. It's more like figuring out the right way to generalize the prior ideas. This makes me think about that annoying word "marginal". This work is proposing an algorithm that's theoretically capable in a very general setting, but hasn't been empirically demonstrated to work amazingly on real-world data (it's all very synthetic data). So I can't sell the importance of the paper on it's empirical prowess, it's instead got to mostly rely on its theoretical prowess instead. The generality is nice, but just how big of a deal is it? Not sure, and I think that at the end of the day it does cross the line to being significant enough. But it is a bit of a toss-up to me. Essential References Not Discussed: None in particular. Other Strengths And Weaknesses: The writing in the paper is really nice. Like I'm super enjoying the fact that, as I read the paper, I'll wonder "oh, but does this require us to know the value of the loss function at the true optima" and the paper will have a remark saying something like "in practice, we don't know this value of the loss function at the true optima, so we use XYZ in practice instead and that works great", and I greatly enjoy this. This pre-empting of the natural questions, and having good answers to these questions, made this paper particularly fun to read. I will say they dropped the ball a little bit in section 5.4, where they almost entirely lack this flavor text. Other Comments Or Suggestions: This is a list of typos and recommended edits. Ignore whatever you want to ignore. 1. [Eqn 5] This general optimization problem is fine, but should it optimize over rank(X) \leq something? 2. [Line 172, left] Idk if the language of A as a measurement matrix is standard. If so, ignore more. But if not, then maybe call it a linear map from R^{m by n} to R? Or call it a matrix, but apply A to vec(X*)? Or specify that A_i(X*) = <A_i, X*> via the trace inner product? 3. [Line 193, right] Remark 4.1 is great, glad it's included! 4. [Line 258, left] Naively substituting terms in, we're assuming that $\lambda = \frac1{20}$. 5. [Eqn 15] I think this is supposed to be an assumption on the initial iterate F0, not an implication from the definition of $\varepsilon$? Needs to be rewritten here to clarify this. 6. [Thm 5.4] What is bar lambda? I see that lambda is the regularization inside the preconditioner, but idk what bar lambda is. 7. [Thm 5.4] Add a line which says something like "In particular, $t = O(\chi^2 \log(c_\chi \sqrt{\|\|X_*\|\|}))$ iterations sufficed to have ||L_t R_t' - X_*||_F \leq \eta dist(F0, F*)", or whatever the a good final guarantee is. 8. [Thm 5.4] Why assume WLOG that the r^th singular value is 1? Instead, just write the condition number of X instead of the operator norm. It feels more honest imo, unless there's another reason to normalize this way in the body of the paper. 9. [Thm 5.4] What is C_\chi? 10. [Line 253, right] Is "liner rate" a typo? 11. [Line 255, right] What rate in 5.4 is this? Is this the expresson on the RHS of dist(Ft, F*)? Of ||Lt Rt' - X*||? 12. [Line 280, left] Wonderful! Chi grows with d! Can we make this more rigorous with a specific example? 13. [Defintion 5.8] So here, ||X||_F is the L2 norm of vec(X). And A(x) is a vector whos entries are inner products with vec(X). So, ||A(X)||_1 = ||B*vec(X)||_1 for some matrix B that depends on A. Then, we see this mixed-norm RIP property is the exact same as the 2->1 norm of the matrix B [See e.g. "Estimating the matrix p → q norm" by Guth et al]. Not sure this is interesting or superbly relevant, but it's an interesting connection to a term in classical numerical linear algebra. 14. [Fig 1] I think this! Very simple and nice visual! Write out what the blue and red values actually are maybe? Just a ballpark is fine! Or their ratio, for the sake of Prop 5.9! 15. [Fig 2] Also very nice! Simple clean story about OPSA being slower with overparameterization, but much less vulnerable than ScaledSM! 16. [Corols 5.10, 5.13] You've overloaded the meaning of $\varepsilon$ between here and Thm 5.4. I suggest you change $\varepsilon$ in Thm 5.4 to like $\Delta$ or $\eta$ or just something else. Also, you need to acknoledge the (logarithmic) dependence on the quality of the starting iterate via like C_\chi and \kappa. 17. [Section 5.4] Add some words before sending me straight into a definition. For instance, tell me a bit bout the model of noise you use, and if it's inspired by any prior work's formulation. Also, tell me what Definition 5.11 is really trying to encode. 18. [Figures 2 and 3] These figures seem to have basically the same information, just slightly different input parameters. Replace one of these. Questions For Authors: What's the recommended way to handle not knowing the true rank r a priori? Doubling? There a goodness of fit metric to concern outselves with? What interesting loss functions fit assumptions 5.1 and 5.2, but not the non-rank-restricted versions of these loss functions? I.e. how helpful is it to assume that X1, X2, X are rank <= d? What's going on with Eqn 15? Is this supposed to be a theorem proved about F0, or an assumption on the quality of our first iterate? Remark 5.6 points out that the condition number does matter for the initial iterate's assumption. This should be acknowledged sooner. What does Defn 5.11 really try to encode? This is a weird expression to read in Equation 17. We compare the L1 to L2 norm as done in Defn 5.8, but there's this splitting of A across S and the complement of S, but the norm is outside of the minus sign. It's all a bit odd at a glance. Lines 370-373 point out that Corol 5.13 shows the rate of convergence depends on properties of the measurement matrix. Why is this mentioned here? Wasn't this also the case in Corol 5.10? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to carefully review our paper and for providing such valuable comments and constructive feedback. Below, we present our detailed, point-by-point responses to each of the reviewer’s comments and concerns. > Claims & Evidence Thank you for your positive words and for finding our theoretical results to be solid. Our contributions are mainly theoretical, and hence, the experimental part focused on corroborating the theoretical results. However, the proposed algorithm can also offer significant benefits in addressing real-world problems! Please see further details below on additional experiments conducted on real data. > Methods & Evaluation Criteria To address reviewers’ concerns, we provide additional results on video background subtraction, which is a classic application for robust matrix completion. We test with two well-known video datasets and compare the performance of the proposed OPSA to a state-of-the-art robust PCA method, which requires exact rank and full observation. The visual results can be found [here](https://ibb.co/5XVfRNQf) and [here](https://ibb.co/hJsTH8qm), showing that OPSA is competitive against the exact-rank robust PCA algorithm, despite that the former is given a rough overestimate of the rank and 30% of the observations. > Theoretical Claims Thank you for your positive evaluation of our theoretical claims! In fact, Theorem 5.4 shows a linear rate of convergence that is ***independent of the condition number***. However, the **initialisation condition does indeed depend on the condition number (as also noted in Remark 5.6)**. We will further emphasize this point in the revised version of our paper! > Relation To Broader Scientific Literature Again, we would like to thank the reviewer for recognising the novelty of our theoretical contributions. Our work essentially addresses an open problem in the low-rank matrix factorization literature. As the reviewer recognizes, theoretical challenges led to non-trivial technical contributions. Our experimental section was designed to corroborate the theoretical results, e.g., showing a linear rate of convergence in the rank overparameterized regime, etc. We understand that this might have caused a misunderstanding as to the practicality of the proposed algorithms. We hope that the additional real-world experiments we provided (please see details above) addressed the reviewer's concerns. > Other Comments/Suggestions In the revised paper, we have fixed typos and provided clarification based on comments/suggestions! Below, we analytically respond to some of the key points raised here: 6 -$\bar{\lambda}$ is an auxiliary variable used for the sake of deriving a simple form for the rate of convergence. $\bar{\lambda}$ shows up in the expression we use for $\lambda$ i.e., $\lambda =\frac{\|X_\ast\|_{op}}{c \bar{\lambda}}$. This definition of $\lambda$ ameliorates the derivation of the rate of convergence given in Theorem 5.4. 8 - The reason for normalizing this way is to provide some useful insights into the rate of convergence. As also mentioned above, the rate of convergence does not depend on the condition number, however, the initialization condition does depend on it. 9 - This is $c\cdot \chi$, we will add $\cdot$ to make it clear. 11 - This is the general expression for the rate for any $\lambda$ for the RHS of $\mathrm{dist}(\mathbf{F},\mathbf{F}_\ast)$. > Questions For Authors: - Handling of not knowing the true rank a priori. In practical settings a safe overestimate of true rank but not too large $d$ would work! Of course, this initial guess is problem-dependent and requires prior domain knowledge. Note also that OPSA is not too sensitive to the selection of $d$ (please see Fig. 2), allowing for a range $d < 2r$ without paying a significant price, when it comes to the speed of convergence. > What interesting loss functions fit Assumps 5.1 & 5.2... Convex losses combined with linear maps that satisfy RIP (e.g., our objective for the matrix sensing problem) would satisfy Assumptions 5.1 and 5.2 for restricted rank but not for non-rank-restricted versions. The reason for that is that RIP properties would normally hold for low-rank $X$. > What's going on with (15)?... This is just an assumption! We have revised the statement to make this clear. > What does Def 5.11 really try to encode?... The S-outlier bound property has been used in the robust low-rank matrix recovery problem in prior works, e.g., (Tong et al, 2021a). It actually encodes a property of $\mathcal{A}$ that allows restricted sharpness condition to be satisfied in matrix sensing problems in the presence of outliers. We would like to refer the reviewer to a detailed derivation of this condition, as a natural generalization of RIP can be found in [Char21]. [Char21] Charisopoulos, et al. "Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence." Found. of Comput. Math., 2021.
null
null
null
null
null
null
Protriever: End-to-End Differentiable Protein Homology Search for Fitness Prediction
Accept (poster)
Summary: Protriever is an end-to-end differentiable framework for augmenting performance on downstream applications (e.g., fitness prediction) of language models using vector-based retrieval. The model consists of 2 trainable components, the retriever and the reader, and one static vector index. The retriever learns to select from the index a set of support sequences for a query sequence while the reader learns to reconstruct the query from only the support sequences (by autoregressive decoding). In so doing, the reader learns a joint distribution over query and support sequences. After training, protriever is applied to a fitness prediction task. Claims And Evidence: The claims/result on the architectural choices are well-justified and convincing. The claim that the training procedure improves fitness prediction is supported by Tables 1 and 2, which seem to show a significant improvement over the tested baselines in only certain settings. Methods And Evaluation Criteria: The evaluation criteria is sound -- the ProteinGym dataset is a well known benchmark for fitness prediction. Theoretical Claims: I would like for the authors to provide a reference or brief justification of the claim in sec 3.2 line 245 that the theoretical optimum of the EMDR loss is "a degenerate distribution that assigns all probability mass to the single sequence maximizing the language model’s likelihood of generating the correct output". Experimental Designs Or Analyses: I did check the soundness/validity of the experiments. Supplementary Material: I did review the supplementary sections, paying closer attention to the vector quantization method description. Relation To Broader Scientific Literature: The work relates to ongoing advancements in sequence retrieval for protein property prediction, such as ProtEx [Shaw, et al '24] and builds off seminal work on retrieval augmented generation (RAG, REALM) referenced in the text. Essential References Not Discussed: - Alongside the reference to MSAGPT, the authors may also refer to EvoDiff [Alamdari, et al '23]. - On the subject of structure-aware embeddings models (sec 5, line 410), the authors may also refer to TM-Veec [Hamamsy, et al '24, published in Nat. Bio]. Other Strengths And Weaknesses: Strengths The paper is well-written, clear, and the architectural choices are well-motivated and justified. Weaknesses The article emphasizes architectural decisions but the experimental results are not convincing. The model does not improve performance significantly or across-the-board. The application to fitness prediction has a well-defined benchmark, but the results do not seem to demonstrate any marked effectiveness of the model for this task. I am not sure what Table 3 tells us, since the Spearman values are generally close. Which loss function was used in the end and why? Other Comments Or Suggestions: - Which assays were used during the ProteinGym evaluation? Questions For Authors: There are many other potential applications of this model besides fitness prediction where there doesn't seem to be a clear demonstration of improved performance. I am curious whether the authors are willing to explore any other applications which could separately test the retriever and reader's capabilities. For example, both the reader and retriever can be used for function prediction (by means of homology transfer in the latter). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **C1: EMDR optimum** The EMDR loss function is defined as: $$ \mathcal{L}_{EMDR} = -\log \left[ \sum_k p^{LM} ( \mathbf{q} \vert \mathbf{d}_k ) p^{RETR} ( \mathbf{d}_k \vert \mathbf{q}) \right] $$ (We are experiencing issues with longer equations in OpenReview's Markdown, where subscripts appear to break the rendering - so we have resorted to put _LM_ and _RETR_ as superscripts.) Minimizing this loss is equivalent to maximizing the weighted sum inside the logarithm. Given that $p_{RETR}$ must form a valid probability distribution (summing to 1), this becomes a constrained optimization problem. The solution to such a problem is straightforward: to maximize a weighted sum $\sum_k w_k p_k$ where weights $w_k$ are fixed and $\sum_k p_k = 1$, the optimal strategy is to assign $p_j = 1$ to the term with the highest weight $w_j$, and $p_k = 0$ to all others. In our case, the weights are the language model probabilities $p_{LM}(\mathbf{q} | \mathbf{d}_k)$, so the optimal retriever distribution will place all probability mass on the document that yields the highest language model score. This is intuitive when viewing the retriever as allocating a fixed "budget" of probability mass to maximize the weighted sum — the best strategy is always to invest the entire budget in the option with the highest return. **C2: Additional References** Thank you for the suggestion — both are relevant and will be included in the revision. **C3: Validation Set** For the validation set, we have used the same set as previously used in Tranception (Notin et al) and PoET (Truong et al). It is composed of the following assays: - BLAT_ECOLX (Jacquier et al., 2013) - CALM1_HUMAN (Weile et al., 2017) - CCDB_ECOLI (Tripathi et al., 2016) - DLG4_RAT (McLaughlin et al., 2012) - PA_I34A1 (Wu et al., 2015) - RL40A_YEAST (Roscoe et al., 2013) - SPIKE_SARS2 (Starr et al., 2020) - TPOR_HUMAN (Bridgford et al., 2020) - Q2N0S5_9HIV1 (Haddox et al., 2018) - SPG1_STRSG (Olson et al., 2014) We will include the above clarification in the revision. **C4: Performance impact and breadth of tasks supported** As noted by the two other reviewers, the core practical benefit from our suggested approach lies in the significant speedups over standard retrieval methods such as multiple sequence alignments (MSA), without degrading the fitness prediction performance. We point the reviewer to the response to C2 from reviewer 2keu and C1-C2 from reviewer Bbfm for a more detailed discussion of our contributions there. Note that MSAs have been a cornerstone of computational biology for several decades and, as such, they have been extensively optimized to support tasks like the ones we focus on in this work. Thus, the types of improvements we present in this work are both non-trivial and of significant importance in practice, where these speedups matter (C3, Bbfm) We hope this addresses the concerns of the reviewer in terms of performance improvements. As for the scope of applications our paper covers, we would like to emphasize that ProteinGym encompasses 200+ assays across a wide range of protein functions (e.g,. thermostability, binding, catalytic activity, protein expression) and that, as such, our current experimental scope is already fairly broad and diverse. As noted by the reviewer, there are other protein-related applications where homology is key (e.g., function prediction, tertiary structure prediction). While an adaptation of our proposed method to these other tasks is beyond the scope of this paper, we argue that this potentiality is yet another strength of our proposed model and believe the ideas presented here will be conducive to many subsequent works that explore the benefits of end-to-end retrieval in protein modeling. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal and clarifying the point on the EMDR loss. I would however like to note that regarding C4 paragraph 2, although the ProteinGym contains data from 200+ assays surveying a wide variety of selection types, the selected validation set in C3 is heavily biased towards organismal fitness assays (7 organismal fitness [5 releated to growth, 1 amoxicillin resistance, 1 complementation], 3 binding; this was calculated by loading the DMS_substitutions.csv and DMS_indels.csv available on the ProteinGym website, searching for the strings listed in rebuttal 2keu-C1, and counting the selection type occurrences). For that reason, it is misleading to me to claim the breadth of scope as including catalytic activity, expression, and thermostability. That said, since this validation set is a well established benchmark in the literature (Notin et al, 2022 and Truong et al, 2023) and the focus of the article is on speedups that do not impact performance, I will raise my rating to an accept. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for these additional comments. To address your last point of feedback, we further expanded the time-performance tradeoff analysis to the entire ProteinGym benchmark (217 assays). We obtain the following results, confirming the prior conclusions on the benefits from the proposed differentiable retrieval approach: | Retrieval Method | Retrieval time (s) | Avg. Spearman PoET (single fwd pass) | Avg. Spearman PoET (full ensembling) | |------------------|----------|--------------------------------------|--------------------------------------| | Protriever | 0.0046 | 0.406 | 0.436 | | MMseqs2 | 16.860 | 0.411 | 0.440 | | MMseqs2-GPU | 0.613 | 0.400 | 0.427 | | JackHMMER | 2501 | 0.412 | 0.443 |
Summary: The paper proposes Protriever, an end-to-end differentiable framework for protein sequence modeling. This approach unifies two steps: protein homology retrieval and downstream modeling tasks. This is done by using a vector similarity search to retrieve homologous protein sequences. The authors train Protriever end-to-end for a protein fitness prediction task. They evaluate it on the ProteinGym benchmark for zero-shot fitness prediction. Protriever achieves performance on par with traditional MSA-based approaches. The benefit is on the speedup --- it's reported to be 30-100x faster compared to existing methods. Claims And Evidence: The primary focus of the paper is on the speedup. The authors report the performance (which is indeed on par with the other methods). I think this focus is reasonable and, therefore, I won't comment on performance differences and focus mostly on the speedup part. Comments on the speedups: - Because we're interested in the speedup, I'd like to ensure that the results of the speedup are not influenced by any other factors. What is the benchmarking methodology for this time? What hardware was used? What is the dataset size? Configuration parameters? - On Figure 2, can you report more queries than 1 and 100? Why these numbers? Can you also report uncertainty and confidence intervals? - Because the primary benefit seems to be the speedup, could you tell me and report the end-to-end performance timing for complete prediction tasks? Right now, the speedup is only for the retrieval time. I think it could be alright if the training time is much larger for your method than other methods (i.e. I think you can make the case that retrieval time is more important), but I'd like to have this analysis. So: (i) can you compare the end-to-end time?; (ii) if the end-to-end time for your application is larger with one retrieval, how many retrievals would you need to have for you to break-even time-wise? - On a more practical level, could you tell me a bit more about what applications require such speedups (i.e. are there any reports showing that speedups are issues people are trying to solve?). I'm happy if these are non-academic reports too if that helps. Other comments: - On ablation studies. The ablation studies (Table 3, Table 4) seem to have very minor differences between different values tested. Could you report any uncertainty intervals (e.g. standard deviation over different splits)? It's hard to see whether the values are significant. - The statistical significance angle is true elsewhere too. For instance, you highlight Protriever's advantage in low MSA depth regimes (0.365 vs. 0.352), but this small difference may not be statistically significant, and the paper doesn't provide statistical validation. Because you are reporting average Spearman rank correlation, I don't have good intuition whether such a difference means anything -- to me this looks like noise. - You claim that Protriever is "architecture-agnostic," but this appears somewhat overstated as the flexibility to substitute different components is common to many deep learning frameworks. I appreciate the separation of retriever and reader components in your design, but it's somewhat of a trivial point that these could be any architectures. The same is true for the task. I'd recommend either removing this framing or providing empirical evidence across multiple tasks and architectures beyond fitness prediction to make this more of an important point in the paper. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally appropriate. ProteinGym is a suitable and standard choice in protein fitness prediction. While most things are clear, here are a few questions from my end that it would be great to get answers to: - One issue is that the paper evaluates only on single amino acid substitutions and doesn't test more complex mutation scenarios (e.g. deletions, insertions). - Another issue -- repeating the comment from before -- is that the authors focus only focus on a single prediction task, so it limits your claim that this is task-agnostic (even though I understand what you mean by this). - It would be also great if you could expand your writeup to explain what you do (e.g. training procedures) instead of simply stating that you use PoET's dataset and sampling approach (which makes it hard for a reader to follow what's happening). - Also, do you have any intuition why different sampling strategies work better than others? Theoretical Claims: No proofs or theoretical claims. Experimental Designs Or Analyses: I looked at the experimental design. The issues I've noticed are already included above. Supplementary Material: No. Relation To Broader Scientific Literature: Protriever builds upon conditional protein language models like MSA Transformer (Rao et al., 2021) and PoET (Truong Jr & Bepler, 2023), which condition on multiple related sequences. Unlike MSA Transformer, which requires pre-aligned sequences and cannot model insertions/deletions, Protriever follows PoET's approach of handling unaligned sequences, but adds the innovation of AIDO which sequences are most useful during training. Protriever adapts RAG techniques from NLP (Lewis et al., 2020) to protein modeling, similar to concurrent work like RSA (Ma et al., 2024) and AIDO.RAG (Li et al., 2024). Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Some strengths I haven't mentioned before: (1) the framework's ability to change retrieval databses at inference time seems to be practically useful; (2) Generally, it seems to integrate multiple disciplines quite well. The EMDR and PDist training approaches represent interesting adaptations of NLP retrieval methods to the protein domain, with EMDR showing slightly better performance in the ablation studies. The Fusion-in-Decoder architecture is a sensible choice that handles the computational complexity challenges of processing multiple sequences effectively. (3) it seems to address clear problems within MSA. The end-to-end differentiable retrieval framework for protein sequences represents a novel technical contribution that hasn't been previously explored in this domain. The drawbacks are largely those that relate to the questions I had asked previously. Other Comments Or Suggestions: N/A ## Post-rebuttal update: Increased 3 -> 4 based on the reply. Questions For Authors: Asked above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **C1: Speed-up: inference analysis** Instead of reporting per query times from the MMseqs-GPU paper from Kallenborn et al. we move to rigorously benchmark our method, MMseqs2 in CPU and GPU modes. We randomly sample UniRef50 sequences to create 5 sets of sizes 1, 10, 100, and 1000 each. For each query size, we report the mean and standard deviation across the 5 random datasets (after taking the median over 5 repeated runs per dataset). We use the same MMseqs2 parameters as described in Kallenborn & Chacon et al. to search against ~62 million UniRef50 sequences: increased sensitivity for the CPU-version (s=8.5), prefilter-mode 1, E-value cutoff 10000, and 4000 max sequences. For the GPU-version, we use flags ‘--gpu 1’ and ‘--gpu-server 1’, and the same E-value cutoff and max number of sequences. Our method and MMseqs2-GPU are run on identical hardware with a single L40S using a single thread. MMseqs2 uses 20 threads. The Protriever faiss index is pre-trained on all sequences and does not use sharding for direct single-GPU comparison. Search time per query (in seconds) shows that Protriever remains orders of magnitude faster across query sizes | Method| N=1| N=10| N=100| N=1000| |:-|:-|:-|:-|:-| | MMseqs2 (k-mer)| 28.587 ± 5.7237 | 17.771 ± 2.9081 | 15.010 ± 1.4492 | 15.567 ± 1.0836| | MMseqs2-GPU| 0.637 ± 0.3079| 0.532 ± 0.1285|0.493 ± 0.0653|0.508 ± 0.0246| | Protriever (faiss) | 0.007 ± 0.0005| 0.005 ± 0.0001| 0.005 ± 0.0001| 0.005 ± 0.0001| **C2: Speed-up: training analysis** Embedding the full UniRef50 database with ESM-2 (35M) takes less than 2 hours on a single GPU when using efficient flash-attention implementations. A full training and inference time comparison with other retrieved methods is practically not possible, each requiring dedicated processing steps (MMseqs2 costly database indexing, traditional homology methods compute expensive MSA profiles). Faiss's advantage is that we can export a small index that represents a “cached” result, which can easily be downloaded by end users. Fair comparisons should focus on inference time, where our method demonstrates clear superiority, as shown above. In response C1 to reviewer 2keu, we have included a table showing the performance and retrieval times of competing methods on 10 validation sets from ProteinGym. The faiss-driven retrieval time is several orders of magnitude faster, such that substituting it with MMseqs2-GPU would increase the computation time significantly. **C3: Applications benefiting from retrieval speedups** There are many applications that could significantly benefit from the architecture we present in this work. To name a few: - Evaluating the effect of missense and indel mutants on the full human proteomes entails extracting 20.000+ MSAs, one for each gene, which is time and compute heavy. - Training structure prediction models such as AlphaFold or OpenFold. The Open Protein set that was used to train the latter is for instance composed of 16 million MSAs. Refreshing this set as underlying data sources keep being updated is intensive. - Rapid retrieval is similarly useful in the inference setting of structure prediction models that leverage homologous sequences, where the arguments made in Kallenborn et al. similarly apply here. **C4: Statistical significance** We calculate non-parametric bootstrap standard errors (BSE), methodology from ProteinGym. As results in Tables 1 and 2 are reported over large number of assays they are robustly statistically significant (low MSA depth BSE<0.001). For the the standard estimate for EMDR vs. PDist as computed over the validation set only, the statistical significance is of 0.0072. **C5: Insertion/Deletion** Thank you for the suggestions. We have evaluated our Fusion-in-Decoder on the set of 66 assays in the ProteinGym benchmark. We obtain a score of 0.425 for FiD evaluated with MSA retrieval and 0.422 for FiD with Protriever retrieval. We can observe once again that we match (BSE=0.01) performance of the standard MSA-retrieval approach further demonstrating the versatility of the proposed framework. **C6: Sampling Strategies at training and inference** During training, sequences are sampled with weight inversely proportional to Uni50 cluster sizes to avoid oversampling large clusters (will have more hits at the Uni50 level). We also sample sequences from Uni100 for each Uni50 sequence with weight inversely proportional to the size of the Uni90 clusters they belong to introduce more diversity and not overly represent larger Uni90 clusters. At inference we test out different sampling strategies that combine two sampling strategies: - When sampling Uni100 with Uni90 sequence weights, as in training we make sure to not oversample large Uni90 clusters within each Uni50 representative. - When sampling based on embedding distance, we aim to sample closer sequences at Uni50 level, our retriever is trained to put closest to each query the most useful retrieved documents.
Summary: This paper presents a method for jointly training a homology retrieval model and a conditional language model for fitness prediction. The retrieval model efficiently identifies homologous sequences for a given query protein using efficient vector search. Claims And Evidence: The paper clearly articulates its main claim, and the empirical study aligns well with it. However, it lacks sufficient comparison with the state-of-the-art model, PoET, which is mentioned in the related work. Additionally, while the paper asserts that the proposed method achieves strong performance with improved computational efficiency, the provided empirical studies do not convincingly support this claim. A more compelling demonstration would be a curve plotting computation time (x-axis) against performance (Spearman rank correlation, y-axis), clearly illustrating the model’s advantage in the time-performance trade-off. Methods And Evaluation Criteria: The proposed method is intuitive, though the EMDR retriever loss does not appear to be an EM objective. While this may not be a major contribution of the paper, it would be helpful for the authors to provide an explanation—perhaps in the Appendix—to clarify the validity of this objective. The evaluation criteria follow the standard setting and seem appropriate. Theoretical Claims: No theoretical claims are made in this paper. Experimental Designs Or Analyses: This paper lacks sufficient comparison with the baselines. For example, the SOTA methods in the ProteinGym are ProSST and PoET. ProSST uses structure information, which makes it an unfair comparison with this paper. However, the PoET is closely related, and the performance is better than the performance of this model, which should be compared and discussed. This paper lacks sufficient comparison with relevant baselines. In particular, the SOTA method PoET in ProteinGym is not adequately discussed. PoET is closely related and achieves superior performance. A thorough comparison with PoET, including a discussion of the differences and potential advantages, would strengthen the paper. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This paper is related to scientific literature about protein fitness predictions. Essential References Not Discussed: The paper appropriately cites essential references. However, as mentioned earlier, a more in-depth discussion comparing the proposed method with the SOTA, particularly PoET, should be included to provide a clearer contextual understanding of its contributions and limitations. Other Strengths And Weaknesses: This paper demonstrates a fair level of originality. While the methods employed originate from prior work in other domains, such as NLP, this is the first study to apply them to protein fitness prediction, marking a novel contribution to this field. Other Comments Or Suggestions: NA Questions For Authors: (1) Why is the proposed method not compared with PoET in Table 1? (2) In the last paragraph of Sec. 4.3, it states: 'Each method was initialized from the checkpoint of the previous method.' Does this mean that the model using the 'ESM sets' strategy was trained based on the checkpoints from the 'Fixed BLAST sets' strategy? If so, is this a fair comparison for evaluating the incremental performance of the different training strategies? I will consider raising my score if the authors address these two questions, particularly the first one. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **C1: PoET / SOTA results** Our suggested differentiable retriever approach can readily be used to augment SOTA sequence-based architectures on the ProteinGym benchmark such as PoET. The framework requires minimal changes, though the code has to be modified to allow each query in the batch to process a variable number of retrieved sequences. PoET leverages additional processing of sequences based on various sequence similarity cutoffs on the aligned sequences. This results in more than 30 forward passes through the model (sequences scored from N to C and C to N, with 15 different sampling hyperparameters). Note that PoET's performance when scoring only one direction with default parameters (context size of 12288 and sequence similarity cutoff of 0.95) on the validation set is 0.425. While we could integrate an alignment step in our framework, which would allow for similar inference results, we present here results of a single forward pass of the model, which we think better represents the true potential of the method. We focus our analysis on the subset of 10 DMS assays from ProteinGym used as our validation set, and obtain the following results: | Assay | Spearman - vanilla PoET | Spearman - PoET w/ Protriever | |-|-|-| | BLAT_ECOLX_Jacquier_2013 | 0.677 | 0.700 | | CALM1_HUMAN_Weile_2017 | 0.201 | 0.208 | | CCDB_ECOLI_Tripathi_2016 | 0.423 | 0.422 | | DLG4_RAT_McLaughlin_2012 | 0.594 | 0.581 | | PA_I34A1_Wu_2015 | 0.527 | 0.526 | | Q2N0S5_9HIV1_Haddox_2018 | 0.541 | 0.518 | | RL40A_YEAST_Roscoe_2013 | 0.420 | 0.477 | | SPG1_STRSG_Olson_2014 | 0.272 | 0.216 | | SPIKE_SARS2_Starr_2020_binding | 0.357 | 0.356 | | TPOR_HUMAN_Bridgford_2020 | 0.447 | 0.395 | | **Average** | **0.446** | **0.440** | We match the fitness prediction performance of PoET obtained with ColabFold (performance difference is not statistically significant; using the same bootstrap standard error (BSE) methodology from ProteinGym, BSE=0.0092) **C2: time-performance trade-off** Thank you for the great suggestion. We compared the timing and fitness prediction performance from 4 sequence retrieval systems (Protriever, MMseqs2 (k-mer), MMseqs2-GPU, and JackHMMER) used in conjunction with our proposed FiD architecture or PoET (as discussed in the prior response). For the MMseqs2 searches, we use the default parameters from Kallenborn & Chacon et al. (see our response on benchmarking methodology B1 to reviewer Bbfm for details). We query each wild type sequence of the validation set against the same UniRef50 database as the one used in the original PoET paper (i.e., the 2023_04 checkpoint) We obtain the following results on the validation set, using either the FiD model or PoET (single inference forward pass) : | Retrieval Method | Avg. Retrieval Speed (sec) | Avg. Spearman FiD | Avg. Spearman PoET | |-|-|-|-| | Protriever| 0.0046 |0.404| 0.440 | MMseqs2| 25.9495| 0.411|0.446 | MMseqs2-GPU| 0.7745| 0.402| 0.435 | JackHMMER| 2645.0| 0.413| 0.449 We observe that Protriever achieves more than 100x speedups in retrieval, while maintaining comparable fitness prediction performance. **C3: EMDR as EM like objective** Our EMDR loss follows Sachan et al.'s "EM-inspired" (we use the same terminology as the authors) approach rather than classical EM. The connection to EM is: 1) Implicit E-step: Latent variables are the relevant documents. We estimate their posterior distribution by computing $p_{RETR}(\mathbf{d}_k|\mathbf{q})$ for retrieved documents. 2) Implicit M-step: We update parameters by maximizing the loss in our paper. The loss is akin to EM's core concept of alternating between estimating latent document relevance and optimizing model parameters. **C4: Training strategies** The ESM set strategy was initialized from the last fixed BLAST experiment checkpoint. These results are not set for comparison but to show how these three strategies of iterative training allow us to improve the performance of our retriever. Fixed BLAST sets provide the biggest boost in performance compared to the base retriever (0.347 to 0.379 for EMDR), and allows for the retriever to focus training on hard-to-distinguish sequences, as these are pulled by sequence similarity BLAST search. The second round of training with ESM sets contains easier to separate sequences and further increases performance (0.379 to 0.385). Finally, we train the reader too, end to end on the learned retriever sets, so that the reader model learns to extend to further sequences (0.385 to 0.404). --- Rebuttal Comment 1.1: Comment: I don’t think it’s fair to compare your approach with a degenerated version of PoET. A more compelling evaluation would be to integrate your method with the full version of PoET. However, the time-performance trade-off clearly highlights your contributions. Given this, I will raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for the additional comments! To your last point regarding the comparison with PoET, we further expanded the analysis to strictly match all inference optimization details from the original PoET paper, namely: - Sampling of sequences based on sequence similarity weights as described in Hopf et al [1]; - Ensembling across all possible combinations of 5 different max similarity cutoffs to the query (1.0, 0.95, 0.90, 0.70, 0.50) and 3 maximum context lengths (6144, 12288, 24576), i.e., 15 combinations total; - For each of these combinations, scoring sequences from N to C and from C to N, and averaging the corresponding scores. We report below an updated performance comparison on the same validation set as before: | Retrieval Method | Avg. Spearman PoET (single fwd pass) | Avg. Spearman PoET (full ensembling) | |------------------|--------------------------------------|--------------------------------------| | Protriever PoET | 0.440 | 0.468 | | Vanilla PoET | 0.446 | 0.470 | In that setting, we also find that Protriever PoET matches the fitness prediction performance of vanilla PoET (bootstrap standard error = 0.0071), while being several orders of magnitude faster to retrieve the set of homologous sequences as discussed before. Please let us know if you have any outstanding comments. We sincerely thank you for your valuable input on the manuscript. [1] Hopf, Thomas A. et al. “Mutation effects predicted from sequence co-variation.” Nature Biotechnology 35 (2017)
null
null
null
null
null
null
null
null
Retrieval Augmented Time Series Forecasting
Accept (poster)
Summary: The author introduces retrieval-augmented methods into the time series forecasting problem, proposing Retrieval-Augmented Forecasting of Time-series (RAFT). During the forecasting process, the method retrieves the most similar historical windows from the training set to predict future data. The RAFT method has achieved favorable results on multiple datasets. Claims And Evidence: The article suggests that retrieval windows of different scales provide information at varying levels, such as local or global information. However, as mentioned in the article, when performing retrieval at different scales, RAFT scales the retrieval dataset, k, V, and input x, resulting in the sequences involved in the retrieval process corresponding to the same positions in the original sequence. It is unclear how this processing captures information at different scales. Furthermore, the paper does not include an ablation study to validate this claim, which may lead to confusion. I suggest that the authors conduct corresponding experiments to provide empirical evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria are appropriate for the time series forecasting problem. Theoretical Claims: The theoretical claims are basically correct. Experimental Designs Or Analyses: 1. The multiple periods used in RAFT, as mentioned in the "Claims and Evidence" section, lack experimental validation. 2. In Section 5.2, the authors compare RAFT with and without the retrieval module to demonstrate the benefit of retrieval for rare events. Since retrieval augmentation inherently aims to improve forecasting accuracy, it is expected to enhance results. To better highlight the retrieval module’s ability to capture rare patterns, I suggest the authors conduct experiments on carefully synthesized datasets to compare RAFT against strong baselines. 3. Similarly, in Section 5.3, I recommend evaluating different baselines on the same dataset to substantiate the claim. Otherwise, comparing RAFT with and without retrieval alone may not be meaningful. 4. Section 5.4 states that retrieval is also effective for Transformer-based models and evaluates AutoFormer. Could the authors provide more details on how the retrieval module is incorporated into AutoFormer? Supplementary Material: Yes, I have reviewed the supplementary material. Relation To Broader Scientific Literature: There is a strong connection to retrieval augmentation in NLP. Essential References Not Discussed: Not at all. Other Strengths And Weaknesses: Weakness see "Experimental Designs" Other Comments Or Suggestions: 1. Since setting the stride to 1 when obtaining K and V may lead to high computational complexity, could increasing the stride be beneficial? How would it impact the results? 2. During training, if the input sequence x is located in the middle of the training dataset S, does the retrieval module search for sequences both before and after x? Questions For Authors: See above comments. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments. Please see our point-by-point response. > Claims and Evidence & W1. Rationale and ablation study of multiple periods in retrieval Thank you for raising this important point. First, we would like to clarify that our model performs retrieval across multiple periods (scales) and does not retrieve segments from identical positions across different periods. Instead, it retrieves segments at varying positions for each period based on their similarity to the input segment. Perhaps this misunderstanding arose because Figure 3 depicted keys and values from identical positions across multiple periods. In the revised manuscript, we will update Figure 3 to show different positions of retrieval in each period and revise the corresponding explanation on lines 193–197 of the right column. In addition to our response above, we conducted new ablation tests to empirically validate the effectiveness of multi-period retrieval by comparing performance with and without multiple periods. This new result is shown in the table below, which shows multi-period retrieval shows positive impacts for most datasets. We hope this answers your question and we will be happy to add these findings in the revised manuscript in Appendix C.2. ||ETTh1|ETTh2|ETTm1|ETTm2|Electricity|Exchange Rate|Traffic|Weather| |-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |RAFT|0.367|0.276|0.302|0.164|0.133|0.091|0.378|0.165| |Without Retrieval|0.379|0.282|0.306|0.167|0.143|0.089|0.41|0.182| |Retrieval With 1 Period|0.369|0.276|0.303|0.164|0.133|0.088|0.379|0.168| > W2, W3. Baseline comparison in Section 5.2 and 5.3 Thank you for the suggestion. As recommended, we have conducted additional experiments by comparing RAFT against strong baselines (represented in Table 1) on carefully synthesized datasets, extending our analyses presented in Sections 5.2 and 5.3. The comparison with the four strongest baselines selected based on winning ratio is summarized in the table below. Our results demonstrate that, in the experiments from Section 5.2, the proposed method showed comparable overall performance to the baselines but outperformed them in capturing rare patterns (e.g., occurrences = 1). In the Section 5.3 experiments, our method demonstrated superior performance in modeling less correlated patterns compared to all other baselines. We will include these new experimental results in the revised manuscript. |Pattern Occurrences (Section 5.2)|1|2|4| |--|:-:|:-:|:-:| |TimeMixer|0.236|0.217|0.228| |TimesNet|0.228|0.197|0.192| |MICN|0.228|0.233|0.203| |DLinear|0.264|0.255|0.250| |RAFT without Retrieval|0.259|0.231|0.234| |RAFT with Retrieval|0.221|0.206|0.213| | Pattern Occurrences (Section 5.3)|1|2|4| |-|:-:|:-:|:-:| |TimeMixer|0.286|0.230|0.225| |TimeNet|0.245|0.188|0.194| |MICN|0.254|0.245|0.245| |DLinear|0.318|0.206|0.280| |RAFT without Retrieval|0.269|0.265|0.189| |RAFT with Retrieval|0.185|0.182|0.159| > W4. Details of implementing RAFT in AutoFormer Rather than modifying the internal Transformer architecture to integrate our retrieval module, we directly appended retrieval results to AutoFormer’s predictions at the final stage. Here are some details: AutoFormer divides the input patches into trend and seasonal components, processes them separately, and combines them by addition at the final stage. Our retrieval results, $\sum_{p \in \mathcal{P}}{g^{(p)}(\tilde{\mathbf{v}}^{(p)})}$, are added at the final stage of AutoFormer. We will clarify these details in our revised manuscript. > Suggestion 1. Impact of stride on computational complexity and performance We agree that searching patches in extremely long time-series can be computationally intensive. To address this, we had introduced increasing a stride into the sliding window approach used during retrieval in Appendix D. Our experiments demonstrated that increasing the stride substantially decreases computational costs with minimal impact on performance. Results show that until stride 8, the performance loss is less than 5% while we could decrease the computational cost by 89%. As mentioned in our response to Reviewer 1 (W1), our method can be easily scalable to extremely long time series. > Suggestion 2. During training, if the input sequence x is located in the middle of the training dataset S, does the retrieval module search for sequences both before and after x? To prevent information leakage during training, any patches overlapping with the input sequence x were excluded from the retrieval candidates. After removing these overlapping patches, the retrieval module utilized patches from both before and after the sequence x. At inference time, we ensured a chronological split between the training and test sets, guaranteeing that the input sequence at test time always occurs after the entire training dataset. Therefore, there is no overlap, and the retrieval module leverages the full training set.
Summary: This paper proposes the RAFT framework, which leverages retrieval-augmented generation (RAG) to retrieve similar time series patterns and integrate them to enhance future predictions. The effectiveness of the RAFT framework has been evaluated on well-adopted time series benchmarks, with comparisons against state-of-the-art models. Claims And Evidence: Yes, the claims are well supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the evaluation criteria make sense for the problem. Theoretical Claims: There is no theoretical claim in the paper. Experimental Designs Or Analyses: The design and validity of the experiment are well-founded. Supplementary Material: Yes, I checked the code, and it is well-structured and clear. Relation To Broader Scientific Literature: This paper will have a broad impact on real-life applications of time series prediction models. It paves the way for applying retrieval-augmented generation (RAG) in the time series domain. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper is well-written and clear, with a concise results comparison and convincing visualization of similar patterns. Well done! Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for acknowledging the novelty and wide applicability of our proposed retrieval method. We plan to further revise our manuscript by including additional experiments and clarifications in the revised manuscript.
Summary: This paper presents RAFT (Retrieval-Augmented Forecasting of Time Series), a method for enhancing time series forecasting models by retrieving relevant “patches” from the training dataset that match the current input pattern. These retrieved patches—subsequent future values corresponding to historically similar inputs—are then treated as an additional signal alongside the traditional model input. The authors demonstrate that RAFT boosts performance over numerous strong baselines (including Transformer-based approaches and simple linear or MLP models) across ten benchmark datasets. They also perform extensive analyses on synthetic data to highlight in which scenarios retrieval offers clear benefits (e.g., when patterns are rare or have low temporal correlation). Main contributions 1. Proposing a retrieval-augmented architecture, specifically designed for single time-series forecasting, leveraging historical key-value “patches” in a memory-like module. 2. Showing consistent performance gains on real-world benchmarks (ETT, Electricity, Exchange, etc.) and describing scenarios where retrieval is particularly beneficial (e.g., repeated rare patterns). Claims And Evidence: * The authors claim that retrieval can help a model avoid “memorizing” rarely repeating patterns in its weights, instead leveraging a near-external memory approach to handle such patterns at inference time. Experimental results on synthetic data indeed show RAFT obtains substantial gains—particularly when short-term patterns reappear rarely and have random, less-correlated structures. * They claim RAFT outperforms state-of-the-art baselines like Autoformer, FedFormer, PatchTST, etc. The tables (Table 1, for example) show RAFT surpassing these baselines on average MSE by a clear margin. Methods And Evaluation Criteria: Method Overview: RAFT uses a sliding window approach to generate “key-value” patches from the historical data, where keys are windows of size L and values are the subsequent F steps. Given a query window, RAFT calculates similarity (via Pearson correlation by default) to find top-m “key” matches, then aggregates the corresponding “value” patches. This aggregated retrieval is appended to the original input for final forecasting via a linear projection. Evaluation: They compare MSE and MAE across multiple standard forecasting horizons (F = 96, 192, 336, 720) on widely used popular datasets. They provide ablations, sensitivity analyses, and comparisons to existing SOTA. Theoretical Claims: This paper is applied. Experimental Designs Or Analyses: 1. The authors systematically test RAFT on 10 benchmarks commonly used in time-series forecasting. They vary the forecasting horizon and measure MSE/MAE. They also provide comparative results for 9+ baselines. 2. They do thorough ablations: e.g., removing the retrieval module, random retrieval, retrieval but no attention weighting, etc., to demonstrate each component’s impact. 3. Hyperparameter search is described (e.g., number of retrieved patches m, temperature for weighting, stride for scanning the dataset). 4. The authors present separate analyses on synthetic data to illustrate “rare patterns” and “less correlated random-walk patterns.” This is a strong demonstration of the retrieval approach’s value in difficult scenarios. Supplementary Material: I examined: 1. The dataset details (Appendix A). 2. Detailed hyperparameter choices (Appendix B). 3. Ablation details (Appendix C). 4. Complexity analysis (Appendix D). 5. Synthetic dataset generation details (Appendix E). Relation To Broader Scientific Literature: 1. Retrieval-based frameworks are used extensively in large language models to incorporate external knowledge. This work adapts that principle to time-series forecasting. 2. It aligns with the wave of “memory” or “nonparametric” approaches to handle patterns not frequently observed, reminiscent of k-NN or kernel-based time-series methods, but integrated into a deep learning pipeline. Essential References Not Discussed: Overall, the references are comprehensive enough. Other Strengths And Weaknesses: Strengths: 1. Strong empirical performance: RAFT outperforms multiple strong baselines across standard real-world datasets. 2. Solid ablation and sensitivity studies: They carefully show how retrieval stride, temperature, offset normalization, etc. affect results, enhancing transparency. Weakness: 1. Scalability: For extremely long series (millions of points), storing and searching patches might be expensive. More discussion on large-scale feasibility would be beneficial. 2. Confounding variables: Potential external or exogenous features (holidays, discrete events, etc.) might also be important to consider or retrieve. Other Comments Or Suggestions: It would be interesting to see more real-world interpretability examples instead of the standard datasets. Questions For Authors: Q: As T grows large (say, 10 million+ data points), any efficient way of retrieval ? Q: Does RAFT degrade gracefully when the time series is short (and thus historical patches are limited)? Are there diminishing returns? Q: Could RAFT be extended to incorporate side-information retrieval from multiple correlated time series or external knowledge for each query patch? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments. Please see our point-by-point response. > W1, Q1. Scalability to extremely long series. Thank you for highlighting this important point. We fully agree that searching patches in extremely long time-series can be computationally intensive. In light of this, we had introduced a stride into the sliding window approach used during retrieval to reduce the number of searched segments in the training data (see Appendix D). Our experiments demonstrated that increasing the stride substantially decreases computational costs with minimal impact on performance. For instance, using a stride of 8 reduces inference time by a factor of 8 (which could be even parallelized) with less than 5% degradation in performance. Another strategy could be to train a simple encoder to project the time segments of training data into low-dimensional dense embeddings and perform the search using cosine similarity (refer to Appendix C.1 - Cosine Similarity with Projection). According to our results presented in the Appendix C.1, this method maintains comparable performance with Pearson’s Correlation (default RAFT setting). This approach can significantly reduce inference-time complexity through precomputing dense embeddings of the training data segments. The entire process can be even parallelized, enabling it to handle datasets of a larger-scale such as billions of data points at inference. We can further accelerate the search efficiency via vector search solution through the approximated nearest neighbor (ANN) technique. We will incorporate this discussion into the revised manuscript. > W2, Q3. Extension to handling confounding variables (e.g., external or exogenous features) Thank you for this comment. Our model can be extended to incorporate external features or correlated time series by training encoders to project external information into low-dimensional dense embeddings within the same embedding space as the input features. One natural extension is to consider additional external time series. In this scenario, one encoder can project external segments into dense embeddings and another encoder can project input features into the same embedding space. When embeddings are aligned within a shared space, retrieval can use cosine similarity to identify relevant external segments for each input query patch. The retrieved external segments can then be used alongside the original input for enhanced prediction and encoders would be optimized jointly during training. This idea is consistent with our ablation study described in Appendix C.1 ("Cosine Similarity with Projection"). We will include this extension and its potential benefits in the revised manuscript. > Q2. Does RAFT degrade gracefully when the time series is short (and thus historical patches are limited)? Are there diminishing returns? It is correct that when the available time series is short, the number of historical patches for retrieval naturally decreases, potentially limiting performance gains. To investigate this phenomenon, we conducted additional experiments comparing our model with and without retrieval, varying the length of the training data from our benchmark datasets and evaluated performance on a fixed test set while adjusting the training data proportion to 20%, 60%, and 100%. Our results indicate that the impact of the training dataset's length on performance gains varies significantly across different datasets. However, our approach consistently outperformed the baseline model, even with limited historical data. We will include these findings and further discussion in our revised manuscript. We will include these findings and further discussion in our revised manuscript. | Dataset | Training data proportion | 20% | 60% | 100% | |:-:|-|:-:|:-:|:-:| | ETTh1 | RAFT without retrieval | 0.590 | 0.444 | 0.379 | | | RAFT with retrieval | 0.562 | 0.428 | 0.367 | | ETTh2 | RAFT without retrieval | 0.251 | 0.255 | 0.282 | | | RAFT with retrieval | 0.260 | 0.255 | 0.276 | | ETTm1 | RAFT without retrieval | 1.492 | 0.692 | 0.306 | | | RAFT with retrieval | 0.975 | 0.684 | 0.302 | | ETTm2 | RAFT without retrieval | 1.364 | 0.608 | 0.167 | | | RAFT with retrieval | 1.312 | 0.603 | 0.164 | > Suggestion. It would be interesting to see more real-world interpretability examples instead of the standard datasets. Thank you for the suggestion to showcase practical use cases with real-world data, which will enhance the interpretability of our model. We will include new interpretability examples using real-world scenarios, such as disease spread data from the COVID-19 pandemic (e.g., https://data.who.int/dashboards/covid19/) in the revised manuscript.
null
null
null
null
null
null
null
null
Active Reward Modeling: Adaptive Preference Labeling for Large Language Model Alignment
Accept (poster)
Summary: This paper aims to enhance the reward model in RLHF. Drawing inspiration from active learning, the authors propose Fisher information-based selection strategies to construct an ideal comparison dataset. The experiments show the effectiveness of the proposed method. ## update after rebuttal The authors' response have addressed my concerns, and I decide to raise my score. Claims And Evidence: Yes Methods And Evaluation Criteria: The proposed method is conceptually sound. However, the authors claim that their approach effectively balances the exploration of the representation space and facilitates informative comparisons between pairs. To substantiate these claims, it is essential to incorporate empirical metrics that quantify these aspects. For example, diversity metrics can be employed to measure the extent of exploration. Theoretical Claims: The paper does not contain proofs. Experimental Designs Or Analyses: No significant issues. Supplementary Material: I have thoroughly reviewed all of the supplementary materials. Relation To Broader Scientific Literature: The primary contributions of this paper pertain directly to the development and enhancement of reward models within the framework of RLHF. Essential References Not Discussed: None. Other Strengths And Weaknesses: Other Strengths 1. The authors conduct experiments using multiple LLMs of varying sizes, demonstrating the method's applicability across different model scales. Other Weaknesses 1. Ambiguity in Main Contributions. The primary contribution of this paper remains ambiguous, as the proposed method appears to be a straightforward application of active learning to the reward model without any significant modifications or innovations. 2. Limited Applicability to the Bradley-Terry Model. The proposed method is exclusively compatible with the Bradley-Terry model, which may restrict its applicability to other types of reward models[1]. This limitation poses concerns regarding the method's flexibility and its potential integration with alternative reward modeling frameworks. 3. Inappropriate Placement of Related Works in Section 3.1. Section 3.1 predominantly serves as a detailed introduction to various related works, which detracts from its placement within the methodology section. 4. Increased Risk of Over-Optimization and Reward Hacking. The proposed method may inadvertently elevate the risk of over-optimization and reward hacking. By fine-tuning the reward model through active learning strategies, there is a potential for the model to excessively optimize for specific rewards, leading to behavior that exploits loopholes or unintended shortcuts rather than genuinely achieving desired outcomes. Other Comments Or Suggestions: None Questions For Authors: 1. What are the key hyperparameters used in this paper, and how were they chosen or tuned? 2. How does the proposed method adapt to scenarios where multiple types of preference data exist? 3. What is the accuracy of the initial reward model on the dataset, and how does it compare to the final model? 4. Have you experimented with datasets other than Anthropic? If so, what were the results? 5. What is the time consumption of the proposed method, and how does it scale with larger datasets or more complex models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We thank our reviewer for their time and effort devoted to improving our paper. We have carefully considered each point of feedback and will provide our point-by-point responses below.** --- ### **P1. Main Contributions** We thank the reviewers for raising the question and reminding us to further highlight our contributions. We would like to note a meaningful machine learning contribution typically involves (1) **improved empirical results** on a relevant task compared to strong baselines and (2) **insights into why the method works**. Our paper proposing active reward modeling meets both: 1) Empirical success – We adopt a well-established approach to a novel setting, outperforming baselines including [1] (ICML’2024) and the SoTA active learning method from [2]. 2) Insight – We highlight the trade-off between exploring representation space and comparing uncertain pairs. While active learning is well-studied, our focus is on **applying a classical method to prompt-response selection for reward modeling**. Classical approaches offer robustness, interpretability, and simplicity—our method leverages these strengths while achieving SoTA performance. ### **P2. Applicability beyond the BT Model** We acknowledge the reviewer’s concern that our method is designed around the BT model. The reference [1] in the reviewer’s original review was missing – if the reviewer can provide it, we’d be happy to discuss it further. While improving BT is worthwhile, it remains a widely used framework for modeling human preferences due to its simplicity. Given its role in RLHF workflows, improving sample efficiency within this framework is valuable and lays the foundation for further extensions. Our approach—leveraging last-layer features and optimizing Fisher information (FI) —is also compatible with other neural network-based statistical models, though evaluating broader applications is out of the scope of this paper. ### **P3. Related Works in Sec 3.1** We thank the reviewer and have moved them to the related work section in our revision. ### **P4. Over-Optimization and Reward Hacking** We understand the reviewer’s concern to be that active learning may lead to overfitting. Indeed, poorly designed active learning can cause overfitting— e.g., always selecting the least uncertain samples may degrade performance, sometimes performing worse than random sampling. Our FI-based approach mitigates this by encouraging **exploration in the embedding space**, ensuring selected samples remain diverse, and reducing the risk of overfitting to a narrow subset. Additionally, using last layer features—rather than all layers (as in [2])—acts as regularization, preventing over-optimization and improving robustness esp. in the early stage. ### **P5. Hyperparameters** The proposed method is not sensitive to its hyper-parameter choices, and we have extensive ablations studies on all hyper-parameters: - The reward model MLP architecture: ablation study in Appdx.A.4. - Batch size: ablation study in Appdx.A.2 - Different base models: we test three of them in Sec 5.1 ### **P6. Preference data types** The method is general --- the method can be applied as long as the FI can be calculated. ### **P7. Datasets** Resource constraints limited our experiments to the datasets we reported. While training reward models are cheap, benchmarking across 5 seeds x 32 hyperparameter settings x 3 LLMs x 8 methods led to a **3840× cost increase**, exceeding 2000 USD per dataset on cloud platforms. Despite this, our experiments provide strong empirical support for the method and we can draw statistically significant conclusions from those extensive empirical results. That being said, should there be important insights we need to draw from new experiments, we are more than happy to add them. ### **P8. Initial RM Performance** - The initial performance can be seen in e.g. Fig.3 at iteration 125 (for an initial model trained with 125 random samples). Using our strategy can lead to 5-20x higher sample efficiency as compared to the other baselines and more than 2x higher asymptotic performance. ### **P9. Time consumption** - The optimization problem for selecting data based on the linear approximation of FI requires a single forward pass to obtain the last-layer embedding and a backward pass to compute the FI, which is fast. Working on the embedding space, the experiment shown in Fig.3 took less than 10 minutes to finish for our method and baselines other than [2] and 2 hours for [2] on a CPU machine. --- **We thank the reviewer again for their effort in improving our work. If there should be any remaining concerns or questions, we are keen to do our utmost to address them. Please kindly consider increasing their rating if we address their concerns.** _**References**_ [1] Active preference learning for large language models [2] Batchbald: Efficient and diverse batch acquisition for deep Bayesian active learning --- Rebuttal Comment 1.1: Comment: Thank you for your response. I apologize for the missing references in the previous comments. I am curious about whether this method can be adapted to other types of reward models beyond BT models such as [1]. Can you provide some discussion? [1] Quantile Regression for Distributional Reward Models in RLHF. https://arxiv.org/abs/2409.10164 --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their clarification! [1] is indeed a promising model for learning human reward beyond point estimates. Yes, our strategy can be adopted with some modifications — the strategy can be seen as minimizing the _determinant of asymptotic covariance of the last layer weights_. There are general methods to calculate the covariance for M-estimators where the parameters are estimated by minimizing a loss function with some regularity. The quantile regression part in [1] falls in this general category [2]. The target is a similarly weighted version of the embedding difference. __Following the notation of [1], since all data points are equally weighted in eq (2) of [1] and assuming a centered residual the target asymptotic precision (inverse of the covariance) is proportional to $XX^\top/\tau(1-\tau)$, and we can use its determinant.__ The full general form of this covariance matrix and derivation is a bit involving and can be found in Chapter 3 of [2] using influence functions. For a fixed $\tau$ this is quite similar to det(XtX) we tested but the data and workflow for the reward model would be quite different so a different test is needed to tell. We have added discussions in our revision, and we believe it provides further insight and demonstrates the generality of our method. --- Please let us know if further clarification is needed. Should there be any remaining questions or suggestions, we are more than happy to further address them! ---- _**References**_ [1] Quantile Regression for Distributional Reward Models in RLHF. https://arxiv.org/abs/2409.10164 [2] Koenker, R., 2005. Quantile regression (Vol. 38). Cambridge University Press.
Summary: The paper proposes active learning methods for reward modeling. These active learning methods work as follows: 1. For a large set of prompts, use LMs to generate responses for comparison. 2. Form these generated prompt-response pairs into tuples for either in-prompt comparisons (prompt, response 1, response 2) or cross-prompt comparisons (prompt 1, response 1, prompt 2, response 2). 3. Select a subset of tuples to annotate, using the current reward model. 4. Use the selected subset of tuples to update the reward model. 5. Return to step 1 and iterate. The key problem is step 3: how to select the subset of comparisons from a large candidate pool. For this selection, the authors consider a few choices of scoring functions (Section 3.1), which assign a score to each subset of comparison tuples. Empirically, the scoring rule based on D-optimality works best, according to the authors' evaluation. Claims And Evidence: The authors' main argument for active learning of reward models is that it reduces training costs. However, I find the evidence lacking. The reason I think so is that the current study focuses on reducing the cost of annotating examples, but at the expense of introducing significantly higher costs in the language model (LM) response sampling phase (Figure 2). Indeed, to reduce the overall training cost, a trade-off between **the cost of generating responses** and **the cost of annotating them** must be considered. While active learning methods may reduce the number of training examples (and thereby the annotations of these examples), it introduce the extra cost of sampling, too. Specifically, the active learning algorithm (Figure 2) requires sampling **a large amount of responses** from language models before discarding most of them and narrowing down to a smaller subset to train the reward model. Furthermore, this sampling occurs over many iterations, making this process even more expensive. The repetitive sampling process, as well as the repetitive reward model training process, is unique to the proposed active learning approach and not present in standard training. It seems that the authors did not account for this part of the training cost. What exacerbates the cost situation for the proposed sampling-intensive active learning is that increasing the LM size could lead to higher sampling costs, even though the cost of preference annotation by human experts remains constant. For this reason, I believe that the cost advantage of the authors' sampling-based active learning approach over standard reward modeling will at best narrow---and more likely reverse---as we scale up the model size to a point where the over-sampling of responses becomes costly enough to negate the benefit of annotating fewer responses. Based on my reasoning above, I believe a comprehensive cost-performance trade-off comparison between various active learning methods and a standard reward modeling baseline (without active learning or sampling) is needed to validate the argument that active learning reduces training costs. Currently, such cost-performance trade-off experiment is lacking. Methods And Evaluation Criteria: See "Claims And Evidence". Theoretical Claims: Not applicable, as the paper does not contain theoretical claims or proofs. Experimental Designs Or Analyses: See "Claims And Evidence". Supplementary Material: I have not reviewed the supplemental. Relation To Broader Scientific Literature: This paper aims to train a reward model using active learning. As the reward model is a binary classifier, the proposed approach is not specifically tailored to natural language processing. The literature on active learning for (binary) classification is most relevant, which is very broad. Essential References Not Discussed: Not applicable. Other Strengths And Weaknesses: The main weakness, as detailed in "Claims And Evidence", is that it is unclear how much active learning can lower the overall training cost of reward model. Cost--performance tradeoff experiments are needed for the argument of reducing the training cost to be convincing. Other Comments Or Suggestions: A few typos in the paper: line 56: $r \in \mathbb{R}^D \to \mathbb{R}$ should be $r: \mathbb{R}^D \to \mathbb{R}$. line 4 of Algorithm 1: If I am not mistaken, the argmax should be over $C \subset \mathcal{P}\_{s}$ instead of $C \subset \mathcal{P}\_{s-1}$. Questions For Authors: I am wondering whether the following approaches are valid for the cost-performance tradeoff comparison: 1. Assign a constant value to represent the cost for each preference annotation. 2. Measure the FLOPS required for training the active learning procedure (Figure 2), which should include both the reward model training cost and LLM sampling cost. 3. Compare the overall cost (annotation cost and training FLOPS) of active learning with that of standard reward model training methods. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: **We thank the reviewer for investing their time in reviewing our work, and providing insightful suggestions for improving our paper. We have carefully considered each point of feedback and provided our point-by-point responses below.** --- ### **P1. Response to the main concern: cost of oversampling** _We thank the reviewer for raising the question on sampling cost, and we are reminded there could be misunderstandings about the generation cost in the proposed procedure._ **1. Sampling Cost** While our algorithm requires a **large number of comparisons** to choose from, these comparisons are generated **combinatorially from a small number of responses**, this is because we can reuse a single response in multiple comparisons. For instance, generating $10$ responses rather than $2$ responses as in non-active random sampling will lead to $ 45 $ times more comparisons. Moreover, in the cross-prompt comparisons setup, $10$ responses per prompt on $500$ prompts lead to $12M$ potential comparisons. - [**Spend 2.5 more USD in generation**] To compare the cost of generation and annotation. With our setup of 500 prompts, using the DeepSeek API at off-peak hours, generating 2 responses would cost 0.7 USD (2M input tokens and 2M output tokens); and generating 10 responses would cost 2.5 more USD. - [**Saving 1000 USD with efficient annotation**] In our experiments, we find using 10 responses per prompt can significantly improve the annotation efficiency (by over 20x more efficiently as compared to only generating 2 responses). Annotating $20000$ randomly selected preference data will cost 160 labor hours (assuming 125 annotations per hour) and will cost more than 1000 USD according to the 7.5 USD minimum wage in the US. With our method, the same reward modeling performance can be achieved with $1000$ annotations, which cost $60$ USD (cf. Fig.3). We will be more clear in the revision on this aspect. **2. Training Cost** As for the cost of re-training, we would note the fact that training reward models can be much more computationally efficient than fine-tuning language models [1-4]. In our work, we follow the embedding-based reward modeling setups and the computational cost is negligible as compared to the cost of LLM generation (i.e., **cost of RM training << cost of generation << cost of annotation**) In terms of FLOPs, training a BT reward model with $2048$-dim input on $1000$ samples and optimizing for $10$ epochs correspond to $1e11$ FLOPs. Generating 2 responses (1000-token-long) on those $1000$ samples using a 7B LLM needs $1e13$ FLOPs. The expense of finishing those computations on prevailing cloud service platforms costs less than **$0.01$ USD** On the other hand, hiring annotators for annotation is much more expensive. Annotating preference over $1000$ samples will induce a much higher cost. - The number of candidate prompts and responses per prompt can be tuned to accommodate different budget constraints. With more candidates, it is more likely to get more informative comparisons (c.f. Fig. 18-19 in Appendix A.5) - We also want to highlight that in our experiments, Fisher Information-based active training can achieve better asymptomatic performance compared to other methods for training, especially the most widely adopted random sampling approach. - The reward models can be updated more frequently than the LLMs. This is why our work focuses on active reward modeling rather than a full active alignment workflow (i.e., including both reward model training and LLM fine-tuning). As we have analyzed above, the cost of training reward models is significantly lower than the cost of response generation and collecting preference annotations. On the other hand, from the practical perspective, active (and more frequently updated) reward models can quickly adapt to user preferences and improve user experiences (c.f., Fig.3). ### **P2. Typos** We thank the reviewer for spotting them. We have fixed them in our revision. --- **Once again, we thank our reviewer for their effort in improving our work. If there should be any remaining concerns or questions, we are keen to do our utmost to address them. Please kindly consider increasing the rating if their concerns are well addressed.** --- _**References**_ [1] Gao, Leo, John Schulman, and Jacob Hilton. "Scaling laws for reward model overoptimization." International Conference on Machine Learning. PMLR, 2023. [2] Go, Dongyoung, et al. "Compositional preference models for aligning LMs." The Twelfth International Conference on Learning Representations. 2024. [3] Sun, Hao, Yunyi Shen, and Jean-Francois Ton. "Rethinking reward modeling in preference-based large language model alignment." The Thirteenth International Conference on Learning Representations. 2025. [4] Barreto, André, et al. "Capturing Individual Human Preferences with Reward Features." arXiv preprint arXiv:2503.17338 (2025). --- Rebuttal Comment 1.1: Comment: Thank you for your response. The back-of-the-envelope computation is very helpful. However, could you clarify why the DeepSeek API was used as a metric for computing the sampling cost? Your submission didn't mention DeepSeek. My understanding is that your paper used Gemma2b, Gemma7b, and LLaMA3-8b as the models that generate responses. Is this understanding correct? >With our method, the same reward modeling performance can be achieved with 1000 annotations, which cost 60 USD (cf. Fig.3). Could you walk me through the computation here? Does 1,000 annotations mean 1,000 responses or 10 * 1000 responses? Suppose it is the former (1,000 responses), then I suppose the cost via the DeepSeek API is 1000 * (0.7 * 2.5) = 320 and not 60. Where is my mistake here? > As for the cost of re-training, we would note the fact that training reward models can be much more computationally efficient than fine-tuning language models. Upon re-reading the paper, I understand that the authors use MLP-based reward models with few (like 3) hidden layers throughout the evaluation. Is this understanding correct? If so, provided that the reward models are shallow MLPs, I agree the cost of re-training them is cheap in the authors' setting. However, in realistic post-training settings, reward models are usually language models with a classifier head and not MLPs; see, e.g., sequence classifier reward models in RewardBench (https://github.com/allenai/reward-bench). The repetitive training approach could cause additional computational overhead when applied to these larger models. Furthermore, is there a reason why the authors chose to use an MLP-based reward model? To me, using an MLP as a reward model is quite unconventional choice. As even the arguably the first RLHF paper [Stiennon et al, (2020)](https://arxiv.org/abs/2009.01325) uses LM-based reward models. In practice, it seems unlikely that one would be willing to sample thousands of responses from language model APIs for active learning, yet be unwilling to use an LM-based reward model to improve accuracy. In the paper, it was briefly mentioned that "To separate representation learning from reward modeling, we train our reward model using joint embeddings of prompts and responses." But as long as the reward model is based on the same checkpoint of LM, "representation learning" is not really a confounder that influences the reward modeling accuracy, right? Plus, this is the more realistic setting that practitioners use (see, e.g., the RewardBench paper I mentioned earlier).
Summary: The paper presents an evaluation of different methods to determine which samples in a dataset of (prompt, response one, response two) triplets should receive a preference label and be used to train a LLM reward model using the Bradley-Terry model. The authors propose to use a modification of D-optimality on the embedding space of the reward model to determine which samples to label and train with. Six other strategies to select samples are compared against with a detailed description of each method to select the training samples. The difference between the methods is examined with a simple, 2-dimensional toy dataset and a the Helpful and Harmless data from Anthropic. For the Helpful and Harmless 3 different LLMs are compared to understand how the LLM impacts the data selected with each strategy. The experiments and results rely on 1 - Spearman correlation and Best of N reward. Performance is evaluated both within and across prompts, and an experiment is run to look at the how the number of samples impacts reward model performance. The learned reward models are compared to Gold Standard reward models for the HH Anthropic dataset. The experiments demonstrate the benefits of D-Optimality over the other methods. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. However, the paper would be stronger if at least one other dataset (e.g. OpenAssistant) was evaluated as well. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem at hand. Theoretical Claims: There are not proofs to assess Experimental Designs Or Analyses: Yes, I checked the soundness/validity of the experimental design and analysis. I checked the set up of the 2-dimensional toy experiment and of the helpfulness and harmless experiments. Although many important details are provided to understand the experimental set up, there are some gaps. Please see the questions below. The impact of the number of samples is assessed as well as the impact of annotation size for the D-optimality sample selection strategies. As there is evidence that reward modeling abilities does not always directly translate to final policy performance, an evaluation of the quality of policy the reward model is able to train is missing. This can be done either with DPO or PPO. It would be great to supplement the comparison of samples selected by the different sample selection strategies for the 2-dimensional toy domain experiment with an evaluation of the quality of the learned distribution to draw the connection between the samples selected and what the reward model learns. Supplementary Material: yes, all parts. Relation To Broader Scientific Literature: The paper is well situated within the broader scientific literature on preference sample selection. Essential References Not Discussed: The references discussed appears sufficient Other Strengths And Weaknesses: The paper is well written and easy to follow with key take aways clearly stated along with key aspects of evaluation, such as success criteria clearly state. Other Comments Or Suggestions: 1. Please more clearly highlight that D-Optimality is the method you are "proposing" in this paper. It takes a bit of re-reading to understand that this part is a main contribution. 2. It is difficult to interpret and draw conclusions from Figure 1. Including a table in the appendix with numbers detailing the attributes mentioned in the discussion would be helpful. For example, the mean and standard deviation in number of connections between each point, the difference in rewards, and a measure of sample diversity. Questions For Authors: 1. What is I^s in the active learning section where the datasets are defined? 2. What is meant by "joint embeddings of prompts and responses" on line 326? Please expand upon this. 3. For figures 3, 4, and 5, what are the error bars over? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We thank our reviewer for their encouraging feedback. To respond to the point raised by this reviewer, below, please find our answers to each of the questions.** --- ### **Q1. Translating performance of reward model not to alignment** - In our experiments, we evaluated the effectiveness of different reward models using Best-of-N (BoN) sampling following [1,2] and the Spearman Ranking correlation following [3]. This choice was driven by three key considerations: 1. **Performance**: Empirical studies show BoN achieves better performance than PPO [1,2,4,5] 2. **Stability and Reduced Engineering Overhead**: BoN requires no hyperparameter tuning and is more stable than PPO, leading to more consistent and interpretable results. [1,6,7] 3. **Computational Efficiency and Reproducibility**: BoN’s reusability across N generations during test time makes it more computationally efficient compared to policy gradient optimizations. In contrast, using PPO or DPO for our experimental setups (in total 3840 = 5 random seeds x 3 LLMs x 8 methods x 2 annotation strategies x 4 batch sizes x 2 network sizes x 2 candidate sizes) would be computationally prohibitive since each setup requires distinct LLM fine-tuning [8]. - It is indeed true that a good reward model is not sufficient for good performance. However, it is probably necessary for a good final policy performance and the best-of-n sampling we tested can be seen as a best-case scenario [1] that can be treated as an approximated upper bound of final performance after RL that does not involve costly DPO or PPO. ### **Q2. Adding toy example performance figure** We thank our reviewer for the great idea! We do have a similar performance figure and we will add it in the revision of the paper. ### **Q3. More clearly highlight that D-Optimality** We thank our reviewer for their suggestion. We agree highlighting more on the D-Optimality in the introduction, and method in our revision can further enhance its clarity. ### **Q4. Conclusions from Figure 1** We thank our reviewer for the great suggestion. We will add a table detailing the number of connections and sample diversity, e.g., the variance of pairs in the original space and the last layer of the neural net ### **Q5. What is $I^s$ in the active learning section where the datasets are defined?** The number of unannotated prompt-response pairs. We have clarified the notations accordingly in our revision. ### **Q6. expand upon "joint embeddings of prompts and responses" on line 326** By joint embeddings, we were referring to the embeddings of prompt-response combinations. ### **Q7. For figures 3, 4, and 5, what are the error bars over?** In our experiments, all error bars are generated with **5 repeated runs with different random seeds** to show the statistical significance of performance differences. --- **We thank the reviewer again for their effort in improving our work. If there should be any remaining concerns or questions, we are keen to do our utmost to address them.** --- _**References**_ [1] Gui, Lin, Cristina Gârbacea, and Victor Veitch. "Bonbon alignment for large language models and the sweetness of best-of-n sampling." arXiv preprint arXiv:2406.00832 (2024). [2] Gao, Leo, John Schulman, and Jacob Hilton. "Scaling laws for reward model overoptimization." International Conference on Machine Learning. PMLR, 2023. [3] Sun, Hao, Yunyi Shen, and Jean-Francois Ton. "Rethinking reward modeling in preference-based large language model alignment." The Thirteenth International Conference on Learning Representations. 2025. [4] Dong, Hanze, et al. "Raft: Reward ranked finetuning for generative foundation model alignment." arXiv preprint arXiv:2304.06767 (2023). [5] Yuan, Zheng, et al. "Rrhf: Rank responses to align language models with human feedback without tears." arXiv preprint arXiv:2304.05302 (2023). [6] Ivison, Hamish, et al. "Unpacking dpo and ppo: Disentangling best practices for learning from preference feedback." Advances in neural information processing systems 37 (2024): 36602-36633. [7] Xu, Shusheng, et al. "Is dpo superior to ppo for llm alignment? a comprehensive study." arXiv preprint arXiv:2404.10719 (2024). [8] Stiennon, Nisan, et al. "Learning to summarize with human feedback." Advances in neural information processing systems 33 (2020): 3008-3021. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I will be leaving recommendation as accept.
Summary: This paper investigates strategies to leverage adaptive preference labeling for reward modeling in LLM alignment. The authors propose an Active Reward Modeling ARM framework that uses Fisher information to score and select informative preference comparisons to improve annotation efficiency; then, they benchmark several active learning and experimental design-based strategies across multiple models and datasets. They report gains in annotation efficiency and reward model performance compared to random sampling and other baseline methods. Claims And Evidence: The authors' claims regarding the efficiency of D-optimal and past-aware D-optimal methods are supported by experimental results, but the paper lacks theoretical analysis to explain why these methods outperform others in reward modeling and how they impact the process. Methods And Evaluation Criteria: Generally appropriate. Theoretical Claims: For assumptions, I notice two potential issues: i. The paper assumes that Fisher information computed over the last layer of the reward model sufficiently captures the informativeness of preference comparisons. But this may be an overly idealized assumption. Actually, deep reward models exhibit complex, non-linear behaviors. If we focus only on the last layer, we may overlook critical interactions and uncertainties in earlier layers, and this could limit the robustness of the active selection strategy. ii. The paper assumes that reward model improvement is directly aligned with selecting samples that maximize Fisher information. However, this does not guarantee that the selected comparisons will improve generalization to downstream tasks. In particular, maximizing Fisher information might bias the reward model toward "hard" comparisons that are not representative of typical user preferences. I worry that it may lead to a misaligned reward signal. Experimental Designs Or Analyses: The experimental design relies on Fisher information-based selection during the preference data sampling process, with the assumption that it consistently leads to better reward model performance. The paper does not address the potential biases introduced by focusing on "informative" but possibly unrepresentative comparisons. It would be valuable to discuss the trade-off between focusing on hard-to-predict samples and maintaining representativeness of human preferences, and the actual improvements in real-world alignment this strategy offers. Specifically, how does selecting for Fisher information impact the generalizability and alignment quality of the reward model? The comparison of ARM's preference data selection and resulting reward model performance to baselines is not sufficiently explored. The experimental analyses focus on annotation efficiency and ranking accuracy but do not assess downstream alignment impacts or broader generalization. These are limited and do not offer enough evidence to assess ARM's real-world alignment capabilities. Given that reward models are often applied to diverse and complex downstream alignment tasks, and that preference data may not always map directly to better alignment, it would be beneficial to provide a broader set of downstream evaluations or human-in-the-loop assessments. This would offer a more comprehensive view of ARM's effectiveness in practical LLM alignment scenarios. Supplementary Material: Appendix. Relation To Broader Scientific Literature: This framework builds on prior work in active learning, experimental design, and preference modeling, specifically leveraging ideas from Fisher information-based experimental design and Bradley-Terry preference models to improve reward model data efficiency. And this paper compares its Active Reward Modeling framework against several baselines, including random sampling, max reward difference, max entropy, batchBALD, and coreset methods for preference data selection. Essential References Not Discussed: The main limitation of this paper is the lack of discussion on the motivation and theoretical justification of the proposed method, rather than missing references. Other Strengths And Weaknesses: The idea of actively selecting informative preference comparisons for reward modeling is novel. Most existing work focuses on random sampling or heuristic-based selection for preference data, but these approaches cannot efficiently optimize the annotation process to maximize learning efficiency, this could affect the scalability and effectiveness of reward models in LLM alignment. This paper introduces a method that leverages Fisher information-based selection strategies to identify the most informative preference comparisons, as well as significantly improves annotation efficiency and reward model performance with fewer labeled comparisons. Other Comments Or Suggestions: The authors do not clearly explain why Fisher information is the most suitable criterion for preference data selection in the context of reward modeling, compared to other uncertainty or diversity-based approaches. I wonder why selecting "informative" comparisons based on Fisher information would align better with real human preferences, especially when human preference data may be noisy or inconsistent. Could the authors clarify the motivation behind this assumption? Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **We thank our reviewer for their careful and detailed review of our work. We appreciate that several valuable points of feedback were included, and we believe that the updated version of our work will be strengthened by reflecting on these points. We address concerns and questions below.** --- ### **Q1. Can using the last layer overlook complexity and lost robustness** We thank our reviewer for raising the insightful question. An extension using uncertainty of more parameters via Fisher Information (FI) is possible. E.g. [2] used Bayesian uncertainty for all parameters. In our experiments we find our method to be better than [2]. An intermediate approach using uncertainty of more layers than the last could potentially further improve the performance yet is left for future research. ### **Q2. Can actively selected samples lead to better generalization** We agree with our reviewer that evaluating the generalization ability of RMs is necessary. In our work, we evaluated our RMs with holdout data, showing no signs of overfitting (in contrast, the entropy sampling method shows overoptimization). Relating to the previous point, using only the last layer features may be a regularization, preventing over-optimization. This could explain why we outperform [2]. ### **Q3. Downstream performance** - We agree that a good reward model may not lead to improved alignment. Human preferences are complex, and no scalar reward function can fully capture it. While this is a valid concern, addressing it is beyond the scope of this paper. We follow the common RLHF assumption that a reward function serves as a sufficiently useful optimization objective. Ideally, we would test the entire alignment pipeline but such a test is beyond our computational resources. We believe our experimental setup provides sufficient evidence of the method's utility. - To be more concrete, our evaluation setup has - **Operational definition of alignment** We define alignment as having high human reward value. Since the true human reward is not directly accessible, we use a "golden" reward model—trained on a large dataset—as a surrogate. - **Metric of success** An active learning method is successful if best-of-N samples based on this reward model achieve higher rewards with fewer human annotations. We also reported the Spearman correlation of RMs and the golden reward models. ### **Q4. FI Suitability** - Intuitively, this approach balances exploration in representation space, enabling the reward model to predict across a broader range of prompt-response pairs while exploiting uncertain comparisons between hard-to-distinguish pairs. This is preferable to purely diversity-based methods, which may focus on “easy” comparisons, and more effective than purely uncertainty-based methods, which often compare similar pairs, limiting generalization (c.f. Fig.1). - Empirically, our approach outperforms pure diversity methods, e.g. coreset, and pure uncertainty method e.g. entropy sampling [1]. Our method achieves both better sample efficiency and asymptotic performance. ### **Q5. Does FI Align Better with Real Human Preferences?** - This is a valid concern, and we break it down into two parts as FI depends on model: 1) Is Bradley-Terry (BT) a good model for human preference? 2) If we accept BT as a good model can FI-based active learning handle noise and improve sample efficiency? - For 1), this is a valid concern. As George Box said, "all models are wrong, but some are useful." BT is: a) widely adopted, and b) proven to be effective in large-scale applications. Thus, we consider it a useful model to improve sample efficiency. Testing alternative models to account for inconsistencies e.g. transitivity violations is beyond the scope of this paper. - For 2), assuming BT is useful, our experiments demonstrate that FI-based sampling design can yield a better-performing reward model with less data. The intuition behind is discussed in the response to why FI is a good metric. To summarize: since BT is widely adopted and acquiring human preference data is expensive, our goal is to improve sample efficiency by carefully selecting comparisons. --- **We thank the reviewer again for their effort in improving our work. If there should be any remaining concerns or questions, we are keen to do our utmost to address them. Please kindly consider increasing the rating if their concerns are properly addressed.** --- _**References**_ [1] Muldrew et al. "Active preference learning for large language models." ICML’24 [2] Kirsch et al. Batchbald: Efficient and diverse batch acquisition for deep Bayesian active learning. NeurIPS’19
null
null
null
null
null
null
Prediction-Aware Learning in Multi-Agent Systems
Accept (poster)
Summary: The paper considers learning in time-varying multi-player games using prediction-aware algorithms. It begins by observing that prior results quickly become vacuous when there is a large variation between the games, even though the underlying sequence can be entirely predictable. In light of this, the paper proposes a new algorithm that incorporates predictions under the framework of contextual learning, wherein, in each round, the algorithm makes a prediction about the underlying state of nature. When the sequence of states is somewhat predictable. the paper shows that existing results from the static setting can be generalized even when there is substantial variation between the games. Claims And Evidence: All claims made in the paper are clear and sound. Methods And Evaluation Criteria: The paper is mostly theoretical. The experimental evaluation supports the theoretical claims and drives home the main argument of the paper: it shows that existing algorithms, namely OMWU, fail to take into account the predictability in the underlying sequence of states, unlike the proposed algorithm. The sequence of games constructed is sufficiently rich to make a convincing argument--it's not just a toy example. Theoretical Claims: I checked the proofs, and all claims appear to be sound; I did not find any notable issue. Experimental Designs Or Analyses: The experimental design and the interpretation of the results are, as far as I can ascertain, sound. Supplementary Material: I reviewed the supplementary material; it is well-organized and polished, and, as I said above, I did not find any notable issues in the proofs. Relation To Broader Scientific Literature: The contributions of the paper lie mostly in the area of learning in games, and in particular, time-varying games. It addresses some drawbacks of existing results, as I pointed out above. The paper does a good job at placing the contributions accurately in the context of existing work. In particular, it extends the framework of Syrgkanis et al. from the static setting to the time-varying setting. Essential References Not Discussed: I did not identify any essential reference missing. Other Strengths And Weaknesses: On the positive side, the paper makes a clear contribution by addressing an important gap in prior work in the context of time-varying games. The proposed algorithm is natural and very much relevant in practice--as long as the number of states is reasonably small. I can certainly see many possible applications in which the results of the paper can be used. The paper is also very well-written and organized. The key ideas of the paper are explained very clearly. One drawback is that most of the results are not particularly challenging to obtain, and follow mostly by adapting in a straightforward way existing techniques. But I do not believe that this alone is a basis for rejection. Other Comments Or Suggestions: Some minor points: - What is ARIMA in Line 130? I might have missed the definition. - there is a typo in "is an interesting topics for future research" in Line 348. Questions For Authors: The paper mostly follows the techniques of Syrgkanis et al. I wonder whether the authors tried to use some more recent results that, in the static setting, obtain polylog(T) regret bounds. I also did not find any guarantees about POMWU instantiated with Blum-Mansour, which is introduced in the preliminaries; but I might have missed that. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their very relevant remarks. We answer below to the points they raise. > “What is ARIMA in Line 130? I might have missed the definition.” ARIMA refers to Auto Regressive Integrated Moving Average process, a popular process in time series analysis (see, e.g., [3]). We will make this acronym explicit in the revised version of the text. [3] Hamilton, J. D. (2020). Time series analysis. Princeton university press. > “there is a typo in "is an interesting topics for future research" in Line 348.” Thank you for pointing out this typo, we will correct it in the revised version of the text. > “The paper mostly follows the techniques of Syrgkanis et al. I wonder whether the authors tried to use some more recent results that, in the static setting, obtain polylog(T) regret bounds. [...].” We appreciate the reviewer’s insightful question. We did consider leveraging techniques that achieve polylog(T) regret guarantees in the static setting, such as those in [4] and [5]. However, several limitations led us to build upon Syrgkanis’ framework instead. Regarding [4], the proof techniques are particularly intricate, and it remains unclear whether their approach can be extended to the contextual setting. In fact, the complexity of [4]’s analysis is explicitly acknowledged in the abstract of [5]. As for [5], its theoretical framework—based on self-concordant barriers—appears more adaptable to prediction-aware learning. However, it comes with several drawbacks. First, the polylog(T) regret bound in [5] (Corollary 4.5) introduces a dependence on the number of players J and the number of pure actions K through a factor of JK^{5/2}. In contrast, our external and swap regret bounds scales as $\sqrt{J} \ln K$ and $ \sqrt{J} K \ln K$ (Propositions 1 and 6) respectively, which is significantly more favorable. Additionally, [5] is somewhat less general in scope, as its analysis is restricted to swap regret and does not apply to external regret. This limitation arises because their main proof relies on the positivity of swap regret, which is not guaranteed for external regret. Consequently, adopting their approach would prevent us from providing guarantees for coarse correlated equilibria and social welfare, as Roughgarden’s smoothness condition relies on external regret. [4] Daskalakis, C., Fishelson, M., & Golowich, N. (2021). Near-optimal no-regret learning in general games. Advances in Neural Information Processing Systems, 34, 27604-27616. [5] Anagnostides, I., Farina, G., Kroer, C., Lee, C. W., Luo, H., & Sandholm, T. (2022). Uncoupled learning dynamics with $ o (\log t) $ swap regret in multiplayer games. Advances in Neural Information Processing Systems, 35, 3292-3304. > " [...] I also did not find any guarantees about POMWU instantiated with Blum-Mansour, which is introduced in the preliminaries; but I might have missed that." The Blum-Mansour reduction from external to swap regret stated in Proposition 1 is used to establish Corollary 2-(ii). We will highlight this more clearly in the text of the revised version.
Summary: This paper considers time-varying games where better guarantees (wrt (swap) regret, equilibrium concepts and social welfare) can be achieved when the agents can predict/estimate the time-varying utilities. For a J-player game with utility $c^{j}(w,Z)$ (where $Z$ captures the time-variance), the players use a variant of optimistic exponential weights method to predict $\hat Z_t$ and update the iterates $w_t$. The analysis is relying on the assumption that $Z$ is from a finite set, which limits the variations of the time-varying games. The authors provide guarantees for the regret of each individual player (Prop. 4) which is of similar order as the bound for optimistic Hedge plus a multiplicative factor of $\bar L_T$ (total number of mispredictions of $Z$). From this a bound on the social welfare is derived (Prop 5) and guarantees for the approximation quality of an $\epsilon$-CCE and $\epsilon$-CE. Robustness guarantees wrt the incentive of deviating from the proposed strategy are provided in Prop. 7. Furthermore, the theoretical findings are supplemented with illustrating experiments. Claims And Evidence: Yes. Methods And Evaluation Criteria: This is theoretical work. The illustrating experiments make sense. Theoretical Claims: Yes, mostly (for anything that was not clear from the claims). To the best of my understanding, all theoretical results are correct. Experimental Designs Or Analyses: This is theoretical work. The illustrating experiments make sense. Supplementary Material: Parts of it (see above). Relation To Broader Scientific Literature: The work extends the research on time-varying games in the case where the time-variance can be estimated. It relates to the research on time-varying games on the one hand and ideas from contextual bandits (however, full feedback is considered here!) on the other hand. Essential References Not Discussed: The authors should consider adding the reference 'Multi-agent online learning in time-varying games', Benoit Duvocelle, Panayotis Mertikopoulos, Mathias Staudigl, Dries Vermeulen, in addition to the reference to Zhang et al. Other Strengths And Weaknesses: The paper is clearly written, nice to read and provides an interesting extension to existing results. Other Comments Or Suggestions: none. Questions For Authors: none. Ethical Review Concerns: none. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s positive feedback and the highly relevant reference they have suggested. We will incorporate it into our literature review in the revised version.
Summary: This paper discusses the problem of no-regret learning in general time-varying games with predictions. The authors argue that current regrets, defined as a function of variations in the payoff matrix and variations in the Nash equilibria, become vacuous even in simple examples like the one provided in Example 1. In this way, the paper motivates an alternative viewpoint toward prediction-aware learning by considering prediction oracles that account for changes in nature (parameterized in some way). Building on this, the authors define the notions of external and swap contextual regret and generalize the constructions of [Blum and Mansour, 2007], RVU bounds, CCE convergence, and social welfare guarantees of [Syrgkanis et al., 2015] for this new setting. Claims And Evidence: Yes, the theoretical claims are well supported; the claims are fairly standard, and there is no magic. Methods And Evaluation Criteria: The authors evaluate the proposed algorithm on the Sioux Falls routing problem from [LeBlanc et al. (1975)] to assess its performance under mispredicted contexts. Theoretical Claims: Even though I skimmed, I don’t think any major issues exist since the theoretical claims are pretty standard. Experimental Designs Or Analyses: The routing problem used is quite interesting, and the results seem valid. Supplementary Material: I briefly skimmed the proofs. Relation To Broader Scientific Literature: Learning in time-changing games with supervised learning is an interesting and fundamental problem with many real-world applications. Essential References Not Discussed: None that I can think of. Other Strengths And Weaknesses: The significance of the results is questionable. Once the problem is well-defined in terms of costs, etc., the derivations are straightforward and follow standard techniques in the literature. The authors start the motivation interestingly with an example of predictions that are regression-based, but the focus of the paper is solely on online classification and Littlestone dimension, postponing the more challenging and interesting problem to future work. This work, despite strongly motivated examples, seems incomplete. Other Comments Or Suggestions: I cannot suggest acceptance of this work at this stage since it seems too incremental. However, I am willing to reconsider if the prediction-aware rates under regression oracles yielding regret rates matching those of [Syrgkanis et al., 2015], with perfect predictions as a special case, are included in the results. Questions For Authors: No questions. The paper is well-written and straightforward. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We reply to the points they raise below. > “The significance of the results is questionable. Once the problem is well-defined in terms of costs, etc., the derivations are straightforward and follow standard techniques in the literature.” We respectfully disagree with the characterization of our work as incremental. Our contributions go beyond standard derivations in several important ways. First, we view the formalization of the problem — including the explicit definition of costs — as a significant contribution in itself. In doing so, we introduce several novel concepts, such as contextual swap regret and contextual correlated equilibrium, which represent meaningful advances in the study of contextual multi-player decision-making. Second, the derivation of our results are not as straightforward as they may seem at first sight. Beyond handling the inherent complexity of contextual decision-making, we must also account for the intricate dynamics introduced by multiple players. In particular, players may predict different contexts at each timestep, preventing a direct application of Syrgkanis’ analysis on batches of timesteps with the same revealed context. To address this challenge, we develop a novel regret decomposition in the proof of proposition 4, which is essential for our analysis and enables us to reach our conclusions. > “The authors start the motivation interestingly with an example of predictions that are regression-based [...]” We believe our motivating example aligns with the classification setting, as it involves only two distinct payoff matrices (one for even timesteps, one for odd timesteps), which can be mapped to a {0,1} context set. > “[...] postponing the more challenging and interesting problem to future work. This work, despite strongly motivated examples, seems incomplete.” While many real-world problems can be modeled with a finite number of contexts, we acknowledge the reviewer’s point that extending our framework to regression is both valuable and important. However, such an extension would require an entirely different set of analytical tools and falls beyond the scope of this paper. --- Rebuttal Comment 1.1: Comment: Thanks a lot for your detailed response. While I acknowledge that this problem is interesting, my concern regarding the restrictiveness of the online classification settings remains unaddressed. The motivation in Example 1 of the manuscript that criticizes the measures of variations of $$ P\_T = \min\_{\mathcal{E}\_1 \times \ldots \times \mathcal{E}\_T} \sum\_{t \in [T]} \left( \|\| x\_t^\star - x\_{t-1}^\star \| \|\_1 + \|\| y\_t^\star - y\_{t-1}^\star \|\|\_1 \right), $$ and $$ V_T = \sum_{t \in [T]} \|\| A_t - A_{t-1} \|\|_\infty^2, $$ used in regret bounds by (Zhang et al., 2022) and (Anagnostides et al., 2024) is very strong, but the results of this paper being only limited to cases when variations are limited to a finite number of classes instead of a regression problem is a little bit disappointing and incremental. For this reason, I am not changing my decision, but I am open to reconsidering significantly (as mentioned already in my review) if additional results, matching those of [Syrgkanis et al., 2015], with perfect predictions as a special case, are included. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their response. Even though many applications we envision feature categorical contexts, we would be happy to explore the suggested extension. However, we would like to reiterate that addressing it would require an entirely different set of tools and analysis; and thus warrants a separate study. Below, we outline the key reasons for this conclusion. Assume that $\mathcal{Z}$ is now a compact subset of $\mathbb{R}^d$. Two approaches are possible. First, we could discretize $\mathcal{Z}$, project any context $z\in\mathcal{Z}$ onto the closest node of the resulting mesh, and then apply our analysis. However, this would lead to an exponential dependence on the dimension. Second, we could work directly with a continuum of contexts. However, without restrictions on the policy class, it is always possible to construct an instance in which our notions of contextual and swap regrets grow linearly with $T$. To address this, we would need to either introduce new regret concepts or consider specific policy classes whose complexity can be controlled, such as linear policies. In both cases, this would involve intricate analysis requiring substantial time and effort, making it difficult to incorporate into the current paper, which already introduces numerous concepts and results. Finally, we note that the reviewer acknowledges that our paper is well-motivated, addresses a novel problem, and provides an effective solution both empirically and theoretically under our finite context set assumption. While we recognize that the suggested extension is both interesting and promising, we hope we have convinced the reviewer that it falls outside the scope of the present study.
Summary: This paper introduces a prediction-aware learning framework for time-varying games, where agents can forecast future payoffs and adapt their strategies accordingly. The authors propose the POWMU algorithm, a contextual extension of the optimistic Multiplicative Weight Update algorithm, and provide theoretical guarantees on social welfare and convergence to equilibrium. The framework achieves performance comparable to static settings when prediction errors are bounded Claims And Evidence: 1. Introduction of a prediction-aware learning framework for time-varying games 2. Development of the POWMU algorithm for leveraging predictions about the state of nature 3. Theoretical guarantees on social welfare and convergence to equilibrium are the evidences for the paper. 4. Empirical demonstration of POWMU's effectiveness in a traffic routing experiment also varifies the claim. Methods And Evaluation Criteria: 1.The major method and the evaluation criteria used is regret, and the gurantees on the social welfare. Theoretical Claims: The proofs are well written and I have skimmed through the proofs, though not verified them line by line, but they appear to be correct. Experimental Designs Or Analyses: Empirical demonstration of POWMU's effectiveness in a traffic routing experiment is shown, however, this alone might not be sufficient, I would like to see some more experiments. Supplementary Material: NA Relation To Broader Scientific Literature: The problem is well posed and novel, it will be a good addition to the missing literature. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strenghts: 1. Addresses a gap in existing literature by incorporating agents' predictive capabilities 2. Provides theoretical guarantees that match non-contextual guarantees for static games under certain conditions 3. Demonstrates robustness in adversarial settings Weaknesses: 1. Assumes bounded prediction errors, which may not always be realistic 2. Limited empirical evaluation (only one traffic routing experiment) 3. Complexity of the proposed algorithm might not be practical to implement. Other Comments Or Suggestions: NA Questions For Authors: 1. How does the performance of POWMU compare to other state-of-the-art algorithms in time-varying games? 2. Can the framework be extended to handle partial information or bandit feedback settings? 3. How sensitive is the algorithm to the quality of predictions, and what happens when prediction errors are not bounded? 4. Are there any specific real-world applications where this approach could provide significant improvements over existing methods? 5. How does the complexity of the proposed algorithm handled? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We respond below to the weaknesses they indicate, and reply to their questions. > “The paper assumes bounded prediction errors, which may not always be realistic.” We emphasize that our results hold in full generality, without requiring a bounded number of mispredictions. The case where $L_T​=O(1)$ is presented solely for illustrative purposes, highlighting the connection with the well-known static guarantees from Syrgkanis (2015). However, this is not an assumption underlying our analysis. > “Limited empirical evaluation [...]” We thought that our traffic routing experiment was sufficient to demonstrate the empirical effectiveness of POWMU, which seems to be an opinion shared by other reviewers. However, we are open to adding another experiment if the reviewer considers it helpful. An interesting case would be optimal trading strategy on a financial market, where thehe actual price of the asset would depend (for instance linearly) on the actions of the traders. Before making orders on the market, the trader would forecast the future prices of the asset based on public information. We could work with the historical financial data and contextual information from [3] to run the experiment. [3] Wang et al., 2023. Robust Contextual Portfolio Optimization with Gaussian Mixture Models. > “Complexity of the proposed algorithm[...].” If the reviewer refers to the algorithm's design complexity, we respectfully emphasize that POWMU is quite straightforward to implement. Indeed, the method merely involves maintaining a separate instance of OMWU for each context and updating these instances based on feedback from nature. If the reviewer's concern pertains to computational or memory complexity, we also note that POWMU is not particularly demanding in this respect. The algorithm primarily requires storing vectors and matrices and performing only basic operations, such as element-wise multiplication, matrix multiplication, and exponentiation—all of which are computationally efficient. > “How does the performance of POWMU compare to other state-of-the-art algorithms in time-varying games?” The most recent studies on time-varying games [1, 2] primarily use OMWU as their main algorithm. Figure 2 in our paper compares the regret incurred by OMWU and POWMU in our experiments, showing that POWMU significantly outperforms OMWU. This advantage arises because POWMU leverages context predictions, whereas OMWU relies solely on the previous round’s information to determine its strategy. When the game dynamics change gradually, OMWU can perform well, as suggested by the regret bounds in [1] and [2]. However, in highly unstable payoff environments, our approach achieves substantially better performance, as it selectively incorporates information from past rounds with similar contexts. [1] Zhang et al., 2022. No-regret learning in time-varying zero-sum games. [2] Anagnostides et al. 2023. On the convergence of no-regret learning dynamics in time-varying games. > Can the framework be extended to handle partial information or bandit feedback settings? We believe extending prediction-aware learning to partial information or bandit feedback is a promising yet challenging direction. A potential approach could involve adapting the Low Approximate Regret (LAR) concept from [3] to the contextual setting, building upon standard bandit algorithms to create a bandit variant of POWMU. However, working with the LAR property is inherently more complex than handling RVU, making such an adaptation non-trivial. Although this extension lies beyond the scope of the current paper, we recognize it as an important avenue for future research.. > How sensitive is the algorithm to the quality of predictions,[...] Propositions 5 and 6 explicitly characterize the dependence of the social and individual regret bounds on prediction errors. Specifically, our bound for social welfare grows linearly with $L_T$, while individual regret scales as $\bar{L}_T^{3/4}$. Regarding the boundedness of prediction errors, we kindly refer the reviewer to the first point of our response for further clarification. > "Are there any specific real-world applications where this approach could provide significant improvements over existing methods?" Our framework is broad enough to be applied to various real-world scenarios. Examples include power production and trading, where the state of nature may encompass factors such as wind speed and temperature, which influence renewable energy generation and overall power consumption. Other applications include portfolio management, where agents predict future asset prices, and traffic management. We are currently engaging with a power production firm to explore the application of our method to power generation and trading. > “How does the complexity of the proposed algorithm handled?” We kindly refer the reviewer to the third point of our reply for this question.
null
null
null
null
null
null
Lego Sketch: A Scalable Memory-augmented Neural Network for Sketching Data Streams
Accept (poster)
Summary: The paper presents a new approach to solving the frequency estimation problem on a data stream using small space. The idea is to use a neural network to create a sketch that is tuned to the distribution of the input stream. Experiments show that the average frequency error of the approach is lower than competing randomized sketches, including a recent learning-based approach. Claims And Evidence: The claim that the proposed sketch outperforms existing sketches can be questioned since there is no comparison to the class of counting-based sketches such as Misra-Gries. The authors have a point that randomized linear sketches are usually not compared to deterministic sketches in the literature, and this can be justified in some application scenarios where the power of linear sketches is needed. I think the suggestion to mention deterministic sketches such as MG, and include a discussion of when they are applicable, is acceptable. Methods And Evaluation Criteria: The choice of error measures can be debated since an average is taken over the domain from which stream elements come. This error measure becomes questionable when the size of the domain is much larger than the size of the sketch since small errors on non-occurring elements will tend to dominate. Theoretical Claims: I did not check the proofs in the appendix but have no reason to suspect any errors. Experimental Designs Or Analyses: The experiments seem sound. Supplementary Material: No. Relation To Broader Scientific Literature: While the paper makes a detailed comparison to related, hashing-based sketches, it is known that counting-based sketches are often superior for insertion-only streams, especially for datasets that are skewed and where the distribution over time is relatively stable. Furthermore, these sketches are mergeable. Essential References Not Discussed: I would like to see a comparison to Misra-Gries and related sketches, e.g., the implementation in the DataSketches library (https://arxiv.org/abs/1705.07001). Such algorithms should be superior for large domains since items with no occurrences will alway have an estimate of zero. Other Strengths And Weaknesses: Nothing particular comes to mind. Other Comments Or Suggestions: The comment (page 4) that sketches typically do not consider skew is misleading: As shown by Charikar, Chen, and Farach-Colton CountSketch performs better for skewed distributions. Similar results are known for Count-Min sketch. The use of three hash functions for CS is further supported by https://icml.cc/virtual/2021/poster/9691 Questions For Authors: Would your approach be able to support merging of sketches? This would make it applicable outside of streaming settings. Update after rebuttal: Regarding mergeability I am not sure I managed to convey to the authors what my concern was. If the same models are used to two sketches they are of course mergeable, but in general you may want to use different models for different parts of a dataset, to exploit that they have different characteristics. However, merging sketches having different models seems difficult. It would be good to have some discussion of this aspect. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback! We have addressed your concerns as below and welcome any further discussion should you have additional questions. ## Comparison to insertion-only streams algorithm like Misra-Gries ### Reply to >...counting-based sketches are often superior for insertion-only streams... Counter-based methods (e.g., Misra-Gries) and sketch-based methods (e.g., Count-Min Sketch, Lego Sketch) represent two distinct lines of research in streaming algorithms, as classified by Cormode and Hadjieleftheriou in their foundational work "Finding Frequent Items in Data Streams" (VLDB 2008). So, we respectfully argue that directly comparing sketch-based methods (e.g., Count-Min Sketch, Lego Sketch) with counter-based methods (e.g., Misra-Gries Algorithm) is inappropriate, due to their inherently different design goals: counter-based methods identify frequent items in insertion-only streams with minimal overhead, while sketch-based methods focus on probabilistic frequency estimation for dynamic streams (supporting deletions/weights). This methodological separation is well-recognized in the literature (see papers in reference for details), where sketch-based studies explicitly avoid benchmarking against counter-based methods, due to their fundamentally distinct problem formulations and guarantees. The key differences between the two methods are summarized in the table below, based on Cormode’s paper. |**Aspect**|**Counter-Based Methods**| **Sketch-Based Methods**| |-|-|-| |**Stream Type**|Insertion-only streams.| Dynamic streams (insertions, deletions, weights). | |**Primary Use Case**| Identify frequent items (heavy hitters).| Estimate frequency of any item (supports queries). | |**Data Structure**| Explicit counters for tracked items.| Compact 2D array with hash-mapped counters.| |**Handles Deletions**|No.| Yes.| |**Supports Weights**|Limited.| Yes (positive/negative weights).| |**Flexibility**|Optimized for tracking frequent items.|Extendable to quantiles, entropy, inner products.| |**Examples**| Misra-Gries, SpaceSaving, LossyCounting.|Count-Min Sketch, CountSketch.| |**Strengths** | High precision, low space, fast updates. |Versatile, handles deletions, broader applications.| |**Weaknesses**|Limited to insertions. |Higher space/time costs;| |**Common Error Metric**|ARE (Average Relative Error), AAE (Average Relative Error) | ARE, AAE | ## Methods And Evaluation Criteria: ### Reply to > ... This error measure becomes questionable ... since small errors on non-occurring elements will tend to dominate. We respectfully argue that AAE(Average Absolute Error) and ARE(Average Relative Error) have been well-established standards since the early 2000s (see papers in reference). To ensure fair evaluation and comparison, we adhere to these established practices. Moreover, non-occurring elements are often excluded in prior sketch-based literature for two reasons: 1) They lack true frequency, making ARE computation infeasible; 2) Hash collisions introduce similar absolute errors for both occurring and non-occurring elements, leading to minimal impact on AAE. |CM Sketch |Syn stream (skewness$ \alpha=0.6$) |Syn stream (skewness$\alpha=0.8$)|Syn stream (skewness$\alpha=1.0$)|Syn stream (skewness$\alpha=1.2$)| Syn stream (skewness$\alpha=1.4$)| |-|-|-|-|-|-| |AAE 10K occurring elements|2.98|1.87|1.33|1.04| 0.89| |AAE 10K non-occurring elements|3.09|1.88|1.31|1.05| 0.89| |ARE 10K occurring elements|0.70|0.49|0.41|0.40| 0.36| |ARE 10K non-occurring elements (Infeasible)|n/a|n/a|n/a|n/a| n/a| ## Other Comments Or Suggestions: ### Reply to > The comment (page 4) that sketches typically do not consider skew is misleading... Thank you for pointing this out. We agree that our original phrasing could be clearer in explaining how existing sketches handle skewness. While traditional sketches like CM-Sketch and C-Sketch exhibit different performance under skewed data, this behavior results from their static design rather than active skewness detection. In contrast, Lego Sketch introduces a scanning module that dynamically estimates stream skewness and uses it as prior knowledge to enhance accuracy, as shown in our experiments. We will revise the text for clarity: "Handcrafted sketches face challenges in inferring global stream properties (e.g., distinct item count, skewness) from their compressed storage. While some perform differently under varied skewness, this stems from static design rather than adaptive inference." ## Questions For Authors: ### Reply to > Would your approach be able to support merging of sketches? Like Meta Sketch, Count-Min Sketch, and other sketch-based methods, our Lego Sketch maintains additive properties that fully support sketch merging operations. We will explicitly clarify this consistency in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for your response. Note that counter based sketches do support weights as well as deletions (though accuracy depends on the number of deletions). At least there should be a theoretical discussion of what situations probabilistic sketches give a benefit over deterministic sketches. Considering merging I wonder: Suppose that Alice and Bob independently construct Lego sketches of datasets A and B, respectively. How can these sketches be merged? --- Reply to Comment 1.1.1: Comment: Thank you for your prompt reply. I think your concerns about the relationship between counter- and sketch-based method have been addressed by Berinde and Cormode's work, "Space-optimal heavy hitters with strong error bounds (TODS 2010)" and your concerns about sketch mergability have been addressed by Agarwal and Cormode's work, "Mergeable Summaries (TODS 2013)". ## Reply to > Note that counter based sketches do support weights as well as deletions (though accuracy depends on the number of deletions). At least there should be a theoretical discussion of what situations probabilistic sketches give a benefit over deterministic sketches. **Counter- vs. Sketch-based Method.** As discussed in Cormode and Berinde's works, counter- and sketch-based methods are with different problem scopes. Counter-based method specializes in tasks like heavy hitters, where sketch-based methods are more general in supporting frequency estimation for arbitrary items. For reference, Berinde's work highlights their difference, " A key distinction of sketch algorithms (to counter-based methods) is that they allow both positive and negative updates (where negative updates can correspond to deletions, in a transactional setting, or simply arbitrary signal values, in a signal processing environment)...So, although our results show that counter algorithms are strictly preferable to sketches when both are applicable, there are problems that are solved by sketches that cannot be solved using counter algorithms." **Performance on deletions and negative weights.** Counter-based methods typically requires two separate instances paired with the triangle inequality (see Anderson's work "A high-performance algorithm for identifying frequent items in data streams, IMC 2017"). Their error scales with $\sum |f_i|$, whereas sketch-based methods achieve error proportional to $\sum f_i$. In dynamic streams (turnstile models) or situations with prevalent negative weights, where $\sum |f_i| \gg \sum f_i$, sketch-based methods offer significant advantages. Our work falls under sketch-based methods. More detailed theoretical comparison between general counter-based methods and sketch-based methods is beyond the scope of this work. Based on your suggestion, we will include a discussion of their relationship with counter-based methods in the related work section. ## Reply to >Considering merging I wonder: Suppose that Alice and Bob independently construct Lego sketches of datasets A and B, respectively. How can these sketches be merged? The mergeability of sketch-based methods has been stated in Agarwal and Cormode's work. In a nutshell, sketch-based methods enable the merging of identically configured sketches through bucket-wise addition, effectively summarizing sketches of multiple datasets in a single sketch. This follows their hash-based additive storage mechanism. For example, in a Count-min sketch, each bucket is updated as: hashed_bucket = hashed_bucket + $f_i$. The merging process of two datasets A and B can be demonstrated by the implementation of the ```Join``` function in the publicly Python library ```Probables```, which provides probabilistic data structures. This function simply sums the corresponding buckets from the two sketches: ``` def join(self, second: "CountMinSketch") -> None: ... size = self.width * self.depth for i in range(size): tmp_els = self._bins[i] + second._bins[i] ... self._bins[i] = tmp_els ``` Similarly, LegoSketch employs a hash-based additive storage mechanism for memory blocks. As a result, merging two LegoSketch instances corresponding to datasets A and B follows the same bucket-wise summation approach. Thank you once again for your timely feedback. If you have any further concerns, please feel free to reach out. We are happy to provide additional clarifications or experiment results.
Summary: In this paper, authors have proposed the Lego Sketch, a novel neural sketch designed for data streams. LegoSketch utilize hash embeddings, scalable memory (spread the total space budget across multiple memory blocks, and avoid retraining), memory scanning, and ensemble decoding. During the training phase, the authors have also introduced a self-guided weight loss. LegoSketch are compared with Count-Min, Count Sketch, Meta-Sketch, D-CMS (Elastic Sketch?) and Learned Augmented Count-Sketch on five real-world datasets and six synthetic data streams (following Zipf distribution but with different skewness). As shown in Figure 6, Lego Sketch has achieved 12% - 80% lower estimation errors. In addition, under distribution shift, Lego Sketch has demonstrated much better robustness compared to previously learned linear sketch (Meta-Sketch and Learned Augmented Count Sketch). Claims And Evidence: The methodology is sound and I find most of the claims to be supported. However, there are a few claims that are problematic. 1) At page 4, the authors claim that "handcrafted sketches face challenges in reconstructing stream global characteristics from the compressed storage". I agree that use sketches to reconstruct global statistic and utilize the statistic to improve query accuracy is challenging. There are such studies. I think these works need to be discussed in the paper. Please see: Ting, Daniel. "Count-min: Optimal estimation and tight error bounds using empirical error distributions." SIGKDD 2018, and Chen, Peiqing, et al. "Precise error estimation for sketch-based flow measurement." IMC 2021. 2) Based on Theorem 4.3: $P(Err > \epsilon N) < (\epsilon d_{2})^{-1}$. If we denote the failure probability to be $\delta$, then $d_{2}$ is $(\epsilon \delta)^{-1}$. To solve the frequency estimation problem, the total space required for lego sketch is $K$ $d_{1}$ $(\epsilon \delta)^{-1}$ which is larger than the $1/\epsilon log(1/\delta)$ space requirement for Count-Min and Count Sketch. While Lego Sketch has shown good accuracy in practice, from the theory side, it does not offer "superior space-accuracy trade-offs, outperforming existing Sketches" (in abstract). Methods And Evaluation Criteria: The evaluation looks good to me. and lego sketch has demonstrate high accuracy in different data domains. Theoretical Claims: I didn't not check the proofs in detail, but the theorem 4.1-4.3 are sound. Experimental Designs Or Analyses: Is D-CMS (page 6) referring to Elastic Sketch? Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: Although some of the techniques are not new, given the strong accuracy advantage of Lego Sketch compared with other learned sketches (Meta Sketch and Learned Augmented Sketch), this work can spark novel research directions in scalable, modular neural architectures for real-time data stream processing. Essential References Not Discussed: This paper has several foundational work in this space. Please add references to data summaries in the insertion-only model (e.g., MG summary, SpaceSaving, Lossy Counting), and consider to add discussions on the streaming model for LegoSketch (insertion-only, turnstile, bounded deletion). In section 3.3, the author has mentioned sketches using filtering framework to separate cold and hot items and discussed Elastic Sketch. The author should also consider to add references to related works [1-3]. In addition to Learned Augmented Count Sketch and Meta Sketch, please also add discussion on learning-based frequency estimation [4]. [1] Roy, Pratanu, Arijit Khan, and Gustavo Alonso. "Augmented sketch: Faster and more accurate stream processing." Proceedings of the 2016 International Conference on Management of Data. 2016. [2] Zhou, Yang, et al. "Cold filter: A meta-framework for faster and more accurate stream processing." Proceedings of the 2018 International Conference on Management of Data. 2018. [3] Zhao, Fuheng, et al. "Panakos: Chasing the tails for multidimensional data streams." Proceedings of the VLDB Endowment 16.6 (2023): 1291-1304. [4] Shahout, Rana, and Michael Mitzenmacher. "Learning-based heavy hitters and flow frequency estimation in streams." 2024 IEEE 32nd International Conference on Network Protocols (ICNP). IEEE, 2024. Other Strengths And Weaknesses: S1. The methodology of Lego Sketch is sound. S2. The authors have conducted extensive experiments and demonstrate the strong estimation accuracy of Lego Sketch. S3. Comparing to other learned sketch, LegoSketch is both robust to distribution shift through hash embedding and avoid diminishing return using self-guided loss. W1. Some of the claim are not supported. (see `Claims And Evidence`) W2. Missing important references. Other Comments Or Suggestions: If D-CMS is Elastic Sketch, then I prefer the authors to use the name proposed by the original author. Questions For Authors: 1) At the Scalable Memory section, the authors state "using a hash function to uniformly distribute items in streams across these bricks." The uniform distribution is on the cardinality and not on the frequency. Should the memory block size be dependent on the item's aggregated count (i.e., have memory block of varying sizes and use larger block for larger sub-stream)? I can also see the argument for it to be only dependent on cardinality, but I'm curious about the authors opinion. 2) Please also see W1 and W2. Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Privacy and Security'] Ethical Review Concerns: Given AOL dataset has leaked user information (See https://en.wikipedia.org/wiki/AOL_search_log_release), can experimental results based on AOL be included in the paper? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive feedback! We have addressed your concerns below and are happy to discuss further if needed. ## Claims ### Reply to > ...I agree that use sketches to reconstruct global statistic... There are such studies..... Thank you. While these methods rely on handcrafted rules to infer error distributions from bucket values, our Scanning Module leverages end-to-end training to directly reconstruct global stream characteristics, such as skewness and cardinality, from bucket values. Intuitively, directly reconstructing global stream characteristics is more effective, as they are the root cause of error distributions. That said, we may also explore adapting our end-to-end approach for error distribution reconstruction in the future. We will revise the descriptions to include discussions of these related methods. ### Reply to > While Lego Sketch has shown good accuracy in practice, from the theory side, it does not offer "superior space-accuracy trade-offs".. As discussed in Section 4.3 and the Limitations Section (Appendix A), our theoretical bounds are indeed looser than those of handcrafted sketches. However, our key theoretical contribution is establishing the first formal bounds for pure neural sketches, unlike meta-sketches, which don't have theoretical guarantees. Following your suggestion, we will revise the abstract to clarify the "superior space-accuracy trade-offs" is empirically supported. ## References: ### Reply to >Add references to insertion-only model and consider to add discussions on the streaming model for LegoSketch. Lego Sketch, like Meta Sketch, Count-Min Sketch, and Count Sketch, operates in the turnstile model using additive memory buckets, which naturally support arbitrary deletions. We will make this clearer in the revised manuscript. And we will add a dedicated subsection in the Related Work section to discuss insertion-only data summarization techniques (e.g., MG summary, SpaceSaving, Lossy Counting). ### Reply to > ... The author should also consider to add references [1-3]. As noted in Section 3.3 (References in Lines 502 and 510), we have already included references to derivatives like Augmented Sketch and Cold Filter, citing representative works (Zhou et al., 2018; Roy et al., 2016; Hsu et al., 2019; Aamand et al., 2024). Following your suggestion, we will additionally cite: Zhao, Fuheng, et al. "Panakos: Chasing the tails for multidimensional data streams." This will help readers better understand the technical lineage and connections between these approaches. ### Reply to >Add discussion on Shahout, "Learning-based heavy hitters and flow frequency estimation in streams." Thank you for bringing this recent work to our attention. The paper employs learning-based methods to separate high/low-frequency items, thereby improving the **insert-only** SpaceSaving algorithm. However, this approach operates in a different streaming model **(insertion-only)** with our proposed sketch method **(turnstile)**. We will include it in the revised Related Work section as an learning-enhanced variant of the insertion-only methods. ## Other Comments Or Suggestions: ### Reply to > I prefer the authors to use the original name of Elastic Sketch. Thank you. We will adopt the original naming convention and refer to them as Elastic Sketch (CMS/Lego) to improve clarity. ## Questions: ### Reply to >Discussion on whether memory block size can be determined based on frequency or cardinality. We appreciate the insightful question. Our current design of Lego Sketch employs uniform item distribution across fixed-size memory blocks based on cardinality. As formalized in Theorem 4.2, this ensures that the skewness distribution within each sub-block closely matches the global stream skewness. Consequently, all blocks exhibit similar statistical properties (skewness, cardinality, and memory allocation) - a desirable feature for scalable memory design, as it ensures stable statistical characteristics during scaling memory. Indeed, memory block size does not have to be determined solely by cardinality. A frequency-aware approach could enhance efficiency by assigning larger blocks to high-frequency substreams but may introduce risks under distributional shifts. Although such strategies have potential benefits, they fall outside the scope of our current work. We would be glad to explore this direction in future — for example, using learning-based predictors to separate high- and low-frequency items and adjust block sizes accordingly. ## Ethical Concerns: ### Reply to > AOL dataset has leaked user information... Thank you for pointing this out! This dataset has been widely used in prior studies, so we followed this practice without realizing the issues. We will remove the results derived from this dataset. The remaining four real-world datasets and six synthetic datasets would be sufficient to demonstrate the superior performance of our work.
Summary: This paper proposes a method for estimating the frequency of items in a data stream by means of sketching of embedding vectors of the items. It claims scalable memory use by means of multiple "bricks". It compares with hand-crafted and neural sketch methods on a number of datasets. Claims And Evidence: The proposed method is found to have consistently lower error metrics as a function of memory budget relative to the competing methods, on a number of datasets. It is also found to have superior robustness to distribution shift on synthetic datasets. Methods And Evaluation Criteria: I am not familiar with the datasets in this area, but the metrics are simple and reasonable. Theoretical Claims: The proof of Theorem 4.1 appears correct. Experimental Designs Or Analyses: I am not expert in this area, but all experiments seem sound. I do not find any issues. Supplementary Material: I checked the proof of Theorem 4.1. Relation To Broader Scientific Literature: I am not an expert on sketching. Thus, I cannot say much on the novelty of the approach, the benchmarks, the competition and the state of the art. However, simply by reading the method, it is hard to see that this is a novel method. Every component is very simple. For example, the authors claim to introduce a novel "normalized multi-hash embedding". But this is just some standard embedding followed by some standard hashing followed by normalization. It is extremely difficult to imagine that this is novel. Same goes for all components. Despite not seeing any novel idea, I rate the paper with weak accept, simply because of my lack of expertise on the subject. ## Post-Rebuttal Given my lack of familiarity with the subject and the positive scores of the other reviewers, I am keeping my score. However, on my concern on novelty, I strongly advise the authors to revise their manuscript according to the post-rebuttal discussion and according to their commitments: That they will specify the technical novelty of each component and of the entire approach relative to the state of the art. Not just qualitative properties of the methods, but in terms of ideas, methods and algorithms. For example, "Scaling" an existing idea is not really new. Essential References Not Discussed: As I am not an expert on hashing, I cannot say if essential references are missing. Other Strengths And Weaknesses: The text is very well written and easy to follow. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback! We have provided responses below and would be happy to engage in further discussion if needed. Neural sketch represents a promising new direction in the sketch field long dominated by handcrafted designs. Existing neural sketches such as Meta Sketch demonstrate feasibility but suffer from limited scalability, insufficient accuracy, and poor deployability. Lego Sketch takes the critical next step—by introducing architectural and training innovations that make neural sketch accurate, scalable, and practically usable. This transforms neural sketch from a conceptual idea into a deployable core technique for data stream processing. More specifically, both handcrafted and neural sketch structures are designed to prioritize simplicity and effectiveness, using minimal memory to cope with infinite data streams. For over a decade, even the latest handcrafted sketches have primarily relied on a few hash functions and a two-dimensional array with straightforward decoding rules. Similarly, leading neural sketches like Meta Sketch rely on basic architectures, using simple MLPs across modules. This highlights the core challenge in sketching: achieving practical scalability with limited space and high accuracy. Lego Sketch introduces significant advancements in this regard, focusing on the emerging domain of neural core sketching. Unlike Meta Sketch, which suffers from limited scalability and frequent retraining needs, Lego Sketch incorporates scalable embedding and memory mechanisms, ensuring seamless deployment without retraining. This fundamental contribution directly addresses a major limitation in the field. Moreover, Lego Sketch demonstrates technical novelty through its structural design. Modules such as the Ensemble Decoding Scanning Module and the self-guided loss contribute to accuracy improvements, as evidenced by comprehensive ablation studies. Experimental results further validate its competitive throughput, underscoring its balanced performance. By refining all components beyond Meta Sketch, Lego Sketch advances the state of the art in neural sketching. --- Rebuttal Comment 1.1: Comment: I thank the authors for the feedback. Given my lack of familiarity with the subject and the positive scores of the other reviewers, I believe I will keep my score. However, on my concern on novelty, I have to say that the authors' response is purely on qualitative properties of the methods. What is needed is to understand novelty in technical terms, in terms of ideas, methods and algorithms. "Scaling" an existing idea is not really new. Such discussion should be in the paper, not just the rebuttal. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback! We will provide a clear explanation highlighting the novelty of each component in the Methodology Section of revised manuscript. Specifically: - The **Scalable Embedding** leverages a normalized multi-hash technique to address, for the first time, the cross-domain generalization challenges of Neural Sketch, with theoretical analysis in Section 4.1. - The **Scalable Memory** dynamically expands memory capacity via memory block stacking, overcoming previous scalability bottlenecks while ensuring prediction accuracy, with analysis in Section 4.2. - The **Scanning Module** proposes an entirely new approach to reconstructing stream characteristics via end-to-end training, aiding the decoding process and validated through ablation studies. - The **Ensemble Decoding** achieves more accurate estimation and provides the first theoretical error guarantee for Neural Sketch, as described in Section 4.3. - The **Self-guided Weighting Loss** employs self-supervised weighting to effectively balance Lego Sketch training across diverse data streams, addressing degradation issues in prior Neural Sketch under certain streams. We will also highlight the novelty of each module more prominently in both the Introduction and Conclusion sections.
Summary: This paper introduces the Lego Sketch, a neural network-augmented sketch for frequency estimation. The sketch consists of several learnable components: a variant of the hash embedding layer, a "memory scanning" module that estimates global characteristics of the stream, and an "ensemble decoding" module that returns an estimated item frequency using a decoder network. The design of the Lego Sketch allows for the size of the memory data structure to be modified without retraining the learnable components of the sketch. The empirical evaluation compares the proposed sketch against several classical and learned baselines, demonstrating improved frequency estimation at lower memory budgets. Claims And Evidence: The main claim in the paper is that the proposed sketch improves on previously proposed approaches for the frequency estimation task. This claim is supported by the empirical evaluation, which shows that the Lego Sketch improves on both "handcrafted" sketches and the neural network-augmented Meta-sketch in both absolute and relative error metrics. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable. Theoretical Claims: I did not check the correctness of the paper's theoretical claims. Experimental Designs Or Analyses: - Supplementary Material: I reviewed the supplementary material, but did not check the included proofs in detail. Relation To Broader Scientific Literature: This paper is related to the literature on memory efficient sketching algorithms for frequency estimation in data streams, and to more recent work on neural network-augmented sketching methods. Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: - The legibility of the figures in Sec. 5 should be improved. With the current scaling of the y-axis, it is difficult to distinguish between the accuracy curves in several subplots. Questions For Authors: 1. At lower memory budgets, the memory overhead of the sketch's neural network components becomes a larger fraction of overall memory usage. This is of particular concern when deploying the sketch on resource constrained edge devices. How much memory do the Lego Sketch's NN components use, and is the Lego Sketch still competitive with the handcrafted baselines when this memory overhead is included? 2. Given the goal of reducing memory usage, it is natural to consider using lower precision numerical representations. What is the numerical precision used in the experiments reported in the submission? 3. Following up on the previous question, how well does the sketch perform when its NN components are quantized to lower precision to reduce memory consumption (e.g., to 4 bits per parameter)? How well does the sketch perform when the buckets of the "scalable memory" are quantized to lower precision? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below, we address your concerns in detail. ## Other Comments Or Suggestions: ### Reply to > The legibility of the figures in Sec. 5 should be improved. Thank you for your suggestion. We will improve the readability of the figures accordingly. ## Questions For Authors: ### Reply to >Question 1: How much memory do the Lego Sketch's NN components use, and is the Lego Sketch still competitive with the handcrafted baselines when this memory overhead is included? The NN components of Lego Sketch are highly space-efficient, containing only around 5K parameters, which equates to a storage size of 20KB with 32-bit precision. Given that the total memory usage in our experiments ranges from 0.3MB to 140MB, the NN components contribute only a negligible fraction, i.e., between 6.7% and 0.014% of the total memory. Even when accounting this overhead, Lego Sketch remains highly competitive and significantly outperforms handcrafted baselines. For example, on the LKML dataset, with a total usage of 620KB (600KB for storage plus 20KB for NN components), Lego Sketch records an AAE of only 2.55, which is lower than CM (2.58) and C Sketch (2.95), even when these baselines use 1100KB of memory. | LKML Dataset, AAE | 600KB | 1100KB | 1600KB | 2100KB | 2600KB | 3100KB | 3600KB | |------|------|------|------|------|------|------|------| | Lego Sketch(+20KB) | 2.55 | 1.67 | 1.03 | 0.61 | 0.3 | 0.18 | 0.11 | | CM Sketch | 6.75 | 2.58 | 1.33 | 0.78 | 0.5 | 0.35 | 0.25 | | C Sketch | 5.36 | 2.95 | 1.95 | 1.41 | 1.08 | 0.85 | 0.69 | Generally, the NN overhead for neural sketches is typically not accounted for, similar to other neural network-augmented methods, like learned sketches and Meta sketch. This is because this overhead can be further amortized across multiple deployed instances. ### Reply to >Question2:What is the numerical precision used in the experiments reported in the submission? Lego Sketch uses the default 32-bit precision, and with only 5K parameters, the memory overhead is negligible. ### Reply to > Question3:how well does the sketch perform when its NN components are quantized to lower precision to reduce memory consumption (e.g., to 4 bits per parameter)? How well does the sketch perform when the buckets of the "scalable memory" are quantized to lower precision? Neural sketch's memory module requires frequent updates, which will frequently triggers quantization and dequantization operations when updating the quantized memory. This significantly impacts throughput. Future research could focus on designing efficient quantization algorithms tailored for high-update scenarios. For the NN components, this could be feasible. However, Lego Sketch has only 5K parameters, which is significantly small—consuming just 0.014% to 6.7% of the total memory storage in experiments. Compared to Meta Sketch, another neural sketch with 52K parameters, Lego Sketch is smaller by an entire order of magnitude. Thus, further quantization of the NN components provides minimal benefit. Following your suggestion, we attempted reducing precision from 32-bit to 16-bit, saving 10KB in storage of NN components. However, under small memory budgets, the error notably increased—e.g., under 600KB, Lego Sketch's error rose from 2.55 to 6.23. In contrast, for larger memory budgets, such as above 2600KB, the error remained consistent within two decimal places. | LKML Dataset, AAE | 600KB | 1100KB | 1600KB | 2100KB | 2600KB | 3100KB | 3600KB | |------|------|------|------|------|------|------|------| | Lego Sketch(+20KB, 32bits) | 2.55 | 1.67 | 1.03 | 0.61 | 0.3 | 0.18 | 0.11 | | Lego Sketch(+10KB, 16 bits) | 6.23 | 2.29 | 1.05 | 0.61 | 0.3 | 0.18 | 0.11 |
null
null
null
null
null
null
The Minimal Search Space for Conditional Causal Bandits
Reject
Summary: his paper explores the problem of minimizing the search space in CB with single-node interventions, covering both do-interventions and conditional interventions. It presents an algorithm for efficiently identifying the minimal globally interventionally superior set, with experiments demonstrating the empirical benefits of intervening solely on the identified sets. Claims And Evidence: Most likely. Methods And Evaluation Criteria: Yes. Theoretical Claims: The conditions/assumptions needed for proposition 4 seems missing while the rest seems correct. Experimental Designs Or Analyses: The experimental design is unclear to me (see Questions). Supplementary Material: Yes, I reviewed some parts of the proof and codes. I didn't check line by line. Relation To Broader Scientific Literature: The LSCA closure serves as a good supplement to PIMOs in the context of hard interventions in single node intervention. Essential References Not Discussed: 1. The paper mentioned soft intervention multiple times, and compares them with conditional interventions. However, prior work on causal bandits with soft interventions (from R1 to R2) is missing, which could provide valuable context and comparisons. [R1] Varici et al. Causal bandits for linear structural equation models. JMLR. [R2] Yan et al. Linear Causal Bandits: Unknown Graph and Soft Interventions. NeurIPS 2024 Other Strengths And Weaknesses: Strengths 1. Paper looks into minimum search space for single node, which is not done by previous studies. 2. Theoretical results are given through intuitive. Weaknesses 1. The paper looks into single node intervention without confounders, which seems to be limited and simple. Could the ideas of this paper applied to 2 node intervention and beyond? 2. The two main contributions: (i) the equivalence between conditional and atomic interventions and (ii) the identification of the LSCA closure—seem somewhat separate. I did not seeing the importance of conditional intervention in this paper and it is defined unclear. Other Comments Or Suggestions: I think a clearer statement of conditional interventions and the experimental setup would be great, as some aspects remain unclear (see Questions). Questions For Authors: have a few questions regarding the setting and definition of Conditional Interventions and related concepts: 1. For the policy $g$, is it a fixed function, or does it belong to a set of functions (as Definition 1 states that a policy exists)? Additionally, is the policy (or the set of possible policies) known to the learner when assigning interventions? 2. Is $\mathbf{Z}_X$ a fixed function, or can it vary when intervening on the same node $X$? Furthermore, is it known to the learner? Does the definition 1 is a superior conditioned on $\mathbf{Z}_X$? 3. What assumptions are made about $g$? For example, for Proposition 4 to hold, it seems necessary that the output range of $g$ matches the do intervention range. Otherwise, if $g(\cdot)=0$, Proposition 4 may become ill-posed. 4. Does the framework assume full intervention capability? That is, can the intervention candidate take any value within the observed range of $X$ when targeting node $X$? 4. The algorithm C4 is unclear to me. Could the authors provide an explanation of how C4 iteratively finds C and $\mathfrak{c}$ using the example in Figure 3? 5. Am I correct in understanding that do-interventions were used in the experiments? Specifically, in the _cond_int_cbn_mab.py file, the do function from the pgmpy package appears to be used. What is the intervention space of MAB, is it binary or discrete or continuous? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review and questions. We are happy to read that you appreciated the intuition given for our theoretical results. Your suggestions for definition clarifications will be taken into account in the camera-ready version, if the paper is accepted. Further, thank you for pointing out these references which we were not aware of. They seem relevant, and we will carefully consider them and their relationship to our paper. As a first comment, and to avoid misunderstandings, please note that our work is about single-node conditional interventions, but not about atomic interventions. As mentioned in the paper, these results are also valid "for atomic interventions in a **deterministic** causal model". We make no statements about atomic interventions in general causal models. Furthermore, when it comes to the conditions of Proposition 4, please read our answer to your third question. In summary, Proposition 4 does not need any restrictions on the policies other than what had been stated in the paragraph on conditional interventions in the Preliminaries section. We will now address the weaknesses and the questions, in the order they appear: ## Answers to Weaknesses: 1. Please note that we do allow for confounders, but not unobserved confounders. We agree that allowing also for unobserved confounders would be an interesting research direction, but it is out of the scope of this already long paper, and would demand significantly more work. See also our answer to question 1 of the reviewer pQQk. Also, as described in the last paragraph of Section 2, the fact that we focus on single-node interventions actually renders our problem significantly complicated (and certainly more complicated than if one allowed for $K > \vert Pa(Y) \vert$ interventions, in which case one would simply need to intervene on all parents of $Y$, as noted by Lee and Bareinboim (2018)). For us, the complexity of our problem comes precisely from allowing single-node interventions and conditional interventions. We also agree that extending this work to K-node interventions with $1 < K < \vert Pa(Y) \vert$ would be an interesting (but again non-straightforward) next research question - please see our answer to question 3 of reviewer pQQk. 2. We do not claim an “equivalence between conditional and atomic interventions”, but an “equivalence between the superiority partial orders for conditional interventions and atomic interventions in deterministic causal models”. Furthermore, although one may consider this as an important contribution, the main result is the graphical characterization of the mGISS. This equivalence (Proposition 4) is used to gain intuition about (and also in proving) the graphical characterization of the mGISS, as mentioned in the paragraph after Remark 5. Thus, this equivalence is a step in the direction of proving that the LSCA closure is indeed the mGISS, as explained in the paper. Finally, conditional interventions are important because they better model many real-world situations, as described in the third paragraph of the Introduction and in the references cited therein. ## Answers to Questions: 1. We do not impose any restriction on the policy $g$. It can be any function with the appropriate domain/codomain as described in the Preliminaries section. Accordingly, the learner can choose any policy. 2. Yes, $\mathbf{Z}_X$ is fixed for each $X$. Also, Definition 1 is indeed stated with respect to a $\mathbf{Z}_X$. Notice however that from Proposition 4 the superiority partial order must be independent from the specific choice of observable conditioning set. We can emphasize these points in the paper. 3. As mentioned in point 1., no assumptions are needed to make about $g$ other than its domain and codomain being $R_{\mathbf{Z}_X}$ and $R_X$, as written in the Preliminaries. Proposition 4 does not need any extra restriction on the policies to hold/be well-posed. 4. Yes, $X$ can be set to any value in its range. We can emphasize this in the Preliminaries section. 5. Covering the entire Figure 3 would be too long for this character-limited rebuttal, so we will focus on the nodes J, K, G, H and E of the graph in Figure 3. Due to line 4 of Algorithm 1, the connectors of J and K are $\mathfrak{c}[J] = J$, $\mathfrak{c}[K] = K$, respectively, since $\mathbf{U} = \\{ J, K \\}$. Now, for node G, the set C (which contains the connectors of the children of G) is $C=\{J\}$, so that $\vert C \vert = 1$ and by lines 8, 9 of the algorithm one has $\mathfrak{c}[G] = J$. Similarly, $\mathfrak{c}[H] = K$. Finally, for node E one has $C = \\{J, K\\}$, so that by lines 10 and 11 of the algorithm $\mathfrak{c}[E] = E$, and $E$ is added to the closure. 6. We use conditional do-interventions. Notice that, with pgmpy methods, the context of a conditional intervention is encoded in the "evidence" argument. Also, the MAB intervention space is discrete (and not necessarily binary). --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. However, I’m still a bit confused by the statement *"We do not impose any restriction on the policy $g$"*. For example, the function $g$ could be defined independently of the inputs $\mathbf{Z}_{X}$ so that $do(X= g(\mathbf{Z}_X))=c$, where $c$ can be *any value* within the range. Here I have some questions. - Is the intervention means two aspects? The target of intervention and the selection of $g$ function - How is the mean reward $\mu_a$ defined, is it a function of $g$ (as it seems need to be a function of $c$)? - How is $g$ selected in C4 algorithm? --- Reply to Comment 1.1.1: Comment: Thank you for your comment. Indeed, we can choose $g$ to be a constant function equal to $c$, where $c$ can be any value in the range of $X$. This is the same as performing the atomic intervention $do(X = c)$. (Are we right to understand that there was a typo in your comment when you wrote $do(X = g(\mathbf{Z}_X)) = c$? We believe you wanted to write $do(X = g(\mathbf{Z}_X) = c)$, which indeed is possible and is just the atomic intervention $do(X = c)$). We now address each of your questions: 1. Yes, to specify a conditional intervention one needs to select which node $X$ to intervene on, and to choose a policy $g$. 2. The mean reward $\mu_a$, being the expected value of $Y$ if arm $a$ is selected, depends on the conditional intervention (arm) $a$ that is chosen. Since $a$ is characterized by a node $X$ and a policy $g$ (as described in point 1. above), indeed $\mu_a$ depends on $g$. Explicitly, we can write: $\mu_a = \mathbb{E}[Y \mid do(X = g(\mathbf{Z}_X))] = \sum_y p(y\mid do(X = g(\mathbf{Z}_X)) \cdot y$. Notice that in the case of an atomic intervention $do(X = g(\mathbf{Z}_X) = c)$ this reduces to the atomic intervention mean reward $\mathbb{E}[Y \mid do(X = c)] = \sum_y p(y\mid do(X = c) \cdot y$. We can clarify this in the preliminaries section. 3. Please note that the C4 algorithm is an algorithm that finds the minimal set of nodes guaranteed to contain the optimal node on which to perform a conditional intervention (that is, the mGISS of the causal graph). It is not the job of the C4 algorithm to select $g$. Instead, $g$ can be learned using a bandits algorithm for conditional interventions. This bandit algorithm can be restricted to only consider conditional interventions on the nodes found by the C4 algorithm. In Figure 4, you can see the impact of restricting a bandits algorithm search to the nodes found by the C4 algorithm on the cumulative regret curves.
Summary: # Summary - This paper is refreshingly nice paper to read, and the authors have taken care to make the paper easily readable. For instance, I am comparing with general papers I read in this area. - In that sense, it is similar to papers by Lattimore which rate high for clarity. - For example, instead of simply stating theorems, the authors provide intuitions behind their claims. - The authors take the problem of causal bandits where interventions are conditional. Unlike prior work where the conditioning gives rise to separate graphs, this is closer to Bareinboim's 2019 work where they define the SCM-MAB problem, wherein the interventions are single node conditional interventions. Claims And Evidence: (a) we establish a graphical characterization of the minimal set of nodes guaranteed to contain the optimal node on which to perform a conditional intervention; - Yes this is shown in Section 4. (b) we propose an algorithm which finds this set, given only the causal graph, with a time complexity of O(|V | + |E|). - Yes this is evidenced and correct. Methods And Evaluation Criteria: I especially liked the fact that they chose to gather their graphs from real-world datasets. The number of nodes may go up to 20, which seems reasonable given the claim of extending to real world applications. Theoretical Claims: Their theoretical claims were proved to the best of my reading. I went through the main paper, as well as parts of the appendix. Experimental Designs Or Analyses: The authors have performed experiments on graphs derived from real world datasets. So there is no intrinsic bias towards their algorithm through instance selection. Supplementary Material: The proofs are well formulated. I checked a couple of them and found no errors. Relation To Broader Scientific Literature: The authors position their work well with respect to recent literature in the area of causal bandits. Many relevant related works have been discussed. Essential References Not Discussed: The authors may want to add references to recent works in minimal intervention sets over MECs. This is not exactly the same problem as the authors are trying to address, but is related. Other Strengths And Weaknesses: # Strengths 1. High clarity of writing. 2. This generalizes previous works in the area. # Weaknesses: 1. The graph does not contain any latents. As the authors point out, extending to unobserved confounders would be an interesting direction of future work. Other Comments Or Suggestions: - The notations with many superscripts and subscripts is at times confusing. In the final manuscript, either simplifying the notations, or adding a table of notations would be useful in case the authors have sufficient space for the same. But i leave this to the authors best judgement. Questions For Authors: - Does the formulation easily extend to the simple regret setting? - What if contexts were vector spaces vs discrete spaces? how do these tie in? Is this an interesting direction? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and questions. We are glad that you enjoyed reading our paper and that you appreciate our efforts to make the message clear. The only paper we were able that seemingly matches your suggestion to add "references to recent works in minimal intervention sets over MECs" was a technical report from February 2025 by Park, Arditi, Bareinboim and Lee, "Structural Causal Bandits under Markov Equivalence". Is this what you meant or did you have other papers in mind that we missed? About the weakness you mentioned, although we agree that the cases with latent confounding and K-node interventions would be an interesting research direction, each of these topics would, from our perspective, require substantial extra research exceeding the scope of this already long paper, and may be the focus of future research. We will do our best to simplify the notation where possible, and will consider adding a notation table to the appendix. We will now address the questions, in the order they were posed. ## Answers to Questions: 1. Yes, we expect it to also hold for simple regret. Note that the set mGISS that we found is guaranteed to contain the node with the best conditional intervention (best arm), and we expect that, if it is easier for the algorithm to find the best arm, then it is more likely that the regret of the chosen arm at the end of training is indeed the smallest possible. 2. We focused on the case where variables have discrete (and finitely many) values since we are interested in the bandits problem (with finitely many arms). If there were an uncountable amount of possible interventions on a node, one would need to use other frameworks, such as Bayesian Optimization. We do expect our work to be amenable to adaptation to that scenario. So yes, continuous/vector space variables (both to intervene on and as contexts) are an interesting research direction.
Summary: This paper studies the conditional causal bandit problem, where interventions depend on observed variables rather than being fixed. It provides a graphical characterization of the minimal set of nodes that guarantees the presence of the optimal conditional intervention. An efficient algorithm with O(|V| + |E|) complexity is proposed to identify this minimal set using only the causal graph. The correctness of both the characterization and algorithm is formally proven. A key result shows that the same minimal set applies to deterministic causal models with atomic interventions. Empirical results demonstrate that the method significantly prunes the search space in both synthetic and real-world graphs, especially in large, sparse models. Integrating this approach into multi-armed bandit (MAB) algorithms improves efficiency and speeds up convergence. These contributions provide a foundation for more efficient decision-making in causal bandits with conditional interventions. Claims And Evidence: Yes, all claims in the submission are supported by clear and convincing evidence. All the theorems and propositions have been formally stated and proved in the supplementary material. Methods And Evaluation Criteria: The authors conduct experiments to assess the fraction of the search space that can be expected to be pruned using the proposed method in both randomly generated and real-world graphs. Additionally, they demonstrate, using well-known real-world models, that their intervention selection can significantly improve a classical MAB algorithm. Theoretical Claims: I have skimmed through the proofs in the supplementary material for the main theoretical results, and it appears to be sound. Experimental Designs Or Analyses: The experiments appear to be sound and demonstrate that the proposed algorithm significantly prunes the search space and substantially accelerates convergence rates when integrated into standard multi-armed bandit algorithms. Supplementary Material: I have skimmed through the proofs in the supplementary material for the main theoretical results, which include Sections C through E. Relation To Broader Scientific Literature: This paper introduces a novel approach to causal bandits by focusing on conditional interventions, which allow the value of the intervened variable to depend on the observed values of other variables. Unlike previous work on hard interventions, where variables are set to fixed values, this method provides a more flexible and realistic framework for decision-making in various applications such as healthcare and personalized recommendations. The authors present a graphical characterization of the minimal set of nodes needed to find the optimal conditional intervention, along with an efficient algorithm for identifying this set. Their contributions include reducing the search space for optimal interventions and demonstrating substantial improvements in convergence rates when integrated into standard bandit algorithms. This work addresses a gap in the literature by fully characterizing the minimal search space for single-node conditional interventions, offering a new perspective on intervention selection in causal models. Essential References Not Discussed: The paper cites and discusses all necessary and relevant prior work. Other Strengths And Weaknesses: I have listed the main strengths and contributions of the paper in the Summary section and the section "Relation to Broader Scientific Literature." In terms of main weaknesses, some assumptions restrict the practical applicability of the work, including no latent confounder assumption, and considering single-node interventions only. The work also assumes that the conditioning set is already known to the agent, whereas in many cases, it may not be clear what conditions to use in advance. Other Comments Or Suggestions: I did not see any major typos or other related issues in the manuscript. Questions For Authors: I have some questions for the authors: * The authors assume that there are no latent variables and leave the extension of this work to the case where latent variables are present as future work. The work by Lee and Bareinboim (2018), which characterizes the set of possibly optial arms when confounders are present. Is it possible to use insights from this work to relax this assumption in the current setup. Could the authors elaborate on this? * Also, the causal graph needs to be known for the proposed approach. Suppose the causal graph is unknown—can one use targeted causal discovery methods, like in the paper https://arxiv.org/pdf/2301.11401, to relax this assumption in the current setup? * The authors focus on single-node interventions. What about multi-node interventions, say up to K nodes, simultaneously? Is it possible to use a similar approach in this case? * In the experimental section, the authors use a per-context UCB-style algorithm. Is it possible to transfer information across contexts and accelerate the learning process? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and questions. We are glad to read that you found our arguments clear and convincing. About the weaknesses that you mentioned, we agree that the cases with latent confounding and K-node interventions would be an interesting research direction. From our perspective, each of these topics would require substantial extra research exceeding the scope of this already long paper, and may be the focus of our future research. Please note that, although the conditioning set needs to be known when performing a conditional intervention, it does not need to be known to establish the minimal search space - from Proposition 4, the superiority partial order must be independent from the specific choice of observable conditioning set. We will now address the questions. ## Answers to Questions: 1. Yes, we believe it may be possible to adapt parts of the work by Lee and Bareinboim (2018) the the single-node, conditional intervention case, but we do not see a straightforward way to do it. In particular, we expect a notion related to their concept of interventional border to be useful in our case, but exactly how would need substantial further research. 2. Yes, we do not see a problem with combining such causal discovery methods in order to tackle problems where the causal graph is unknown in order to allow the use of our method. We can make this explicit in our Discussion/Conclusion section. 3. Yes, we expect that a similar approach would be possible for the K-node intervention case, but not in a trivial way: for example, we do not expect the minimal search space to simply be the cartesian product of the LSCA closures, as one could at first suspect. 4. Possibly, but that would require us to make assumptions about the connection between said contexts. We took the general approach of not making any such assumptions. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and agree that the points I mentioned in weaknesses and questions can be addressed in future work. Based on the contributions in the paper and rebuttal, I will maintain my acceptance score.
null
null
null
null
null
null
null
null
EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM
Accept (poster)
Summary: Leveraging the multi-image comprehension and instruction-following capabilities of the multimodal large language model (MLLM), this paper studies the personalization of diffusion models. It utilizes MLLM to capture the visual elements based on the provided reference images and instruction. The proposed framework could achieve style customization, character customization, and person id customization. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: N/A Experimental Designs Or Analyses: no issues. Supplementary Material: all parts. Relation To Broader Scientific Literature: The proposed method and models could be used to achieve controllable image generation. Essential References Not Discussed: This paper proposes to use MLLM to achieve controllable image generation. However, there are some methods [1,2] also using MLLM to achieve image editing. Although the tasks are different to some extent, both share the similar ideas. In addition, [3] using MLLM to achieve image generation should also be discussed. [1] GUIDING INSTRUCTION-BASED IMAGE EDITING VIA MULTIMODAL LARGE LANGUAGE MODELS? ICLR'24 [2] SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models. CVPR'24 [3] UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion. ACL'24 Other Strengths And Weaknesses: Strengths: This paper builds a multi-reference image generation benchmark, which is helpful for the community. The proposed method achieves better performance than previous works. Weaknesses: - The idea is not new. In Tab.1, the paper makes comparisons between the proposed method and some previous methods. While Tab.1 shows the proposed method can support more functions, the difference between it and KOSMOS-G / MoMA regarding architecture and motivation is slight. For example, regarding the "multiple reference images", I believe such capability could be achieved by using a MLLM which supports the input of multiple reference images. The proposed method achieves better performance, which I believe makes sense, but the novelty is limited. - In addition, there are some additional similar works not discussed [1,2,3]. [1] GUIDING INSTRUCTION-BASED IMAGE EDITING VIA MULTIMODAL LARGE LANGUAGE MODELS? ICLR'24 [2] SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models. CVPR'24 [3] UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion. ACL'24 Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer gzGG, Thanks for your advice. We will address your concerns below. **Q1. Missing references and discussions.** Thank you for raising this point. We have discussed the differences between these MLLM-based frameworks and EasyRef in the Q1 of rebuttal for Reviewer nUGh. We will cite and discuss these image editing and generation methods with MLLMs in the Related Work Section of our revised manuscript. **Q2. The novelty of this paper.** Please refer to the Q1 of rebuttal for Reviewer nUGh. **Q3. Multi-reference consistent generation capability of other MLLM-based frameworks.** 1. Multi-image consistent generation is not achievable simply by supporting multi-image input in a framework; it requires architecture optimization, novel training algorithms, and collaborative integration with data construction to achieve the desired results. 2. Kosmos-G is another MLLM-based framework that can accept multi-image input. However, it focuses more on the composition of distinct elements and lacks the capability to extract consistent elements and generate consistent images based on those elements. For example, in its official setting for benchmarking DreamBench, which contains multiple images per instance, Kosmos-G uses only a single reference image. In addition, as shown in our comparison with EasyRef on the multi-image consistent generation task (https://github.com/anonymous-projectuser/image), visualizations demonstrate that EasyRef produces higher-fidelity and more consistent results. --- Rebuttal Comment 1.1: Comment: Regarding [1] [2] mentioned in the review, the authors do not discuss these two works directly in the response. In the "Comparison with other methods" Part (Sec. 2.4), the authors mentioned "In contrast, we demonstrate that simply using reference tokens in the final layer of the MLLM can provide sufficient reference information and efficiently inject conditions into the U-Net", while both [1] and [2] use learnable tokens in the mllm to guide the denoising process. I acknowledge the contribution in the training scheme (including data construction and optimization strategy), but the basic idea is similar to the previous works. Therefore, I maintain my initial score. [1] GUIDING INSTRUCTION-BASED IMAGE EDITING VIA MULTIMODAL LARGE LANGUAGE MODELS? ICLR'24 [2] SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models. CVPR'24 --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the feedback. We would like to clarify the following points: **Q1** The idea is similar. 1. We acknowledge that MGIE [1], SmartEdit [2], and our work all utilize diffusion models and MLLMs with new tokens. However, the task addressed by EasyRef differs significantly from these frameworks. While both MGIE and SmartEdit focus on single-image editing with instruction comprehension, EasyRef is specifically designed for general multi-image consistent generation. To achieve this, we introduce several unique designs tailored for this task. The key differences are summarized in the table below: | Method | LLM image input | LLM linguistic input | LLM adapter | LLM distillation | Connector | Diffusion adapter | | - | - | - | - | - | - | - | | EasyRef | Multiple images | [reference instruction; user prompt] | inserting new tokens and enabling bidirectional attention in the final layer | N/A | MLP | Decoupled cross-attention layers | | MGIE [1] | Single image | [edit instruction; new tokens] | New tokens | Distill from Flan-T5-XXL (instruction loss) | Transformer | N/A | | SmartEdit [2] | Single image | [edit instruction; new tokens] | LoRA adapters, new tokens | Distill from CLIP text encoder (align loss) | Cross-attention layers | N/A | 2. MGIE and SmartEdit focus on using the MLLM's single-image and instruction comprehension for instruction-based image editing. In contrast, we leverage the MLLM's multi-image comprehension and prompt comprehension capabilities to understand multi-image contexts and user prompt, capturing consistent elements. Furthermore, our method enables explicit control of the reference encoding process through the MLLM's instruction-following ability. [1] GUIDING INSTRUCTION-BASED IMAGE EDITING VIA MULTIMODAL LARGE LANGUAGE MODELS? ICLR'24 [2] SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models. CVPR'24
Summary: This paper proposes EasyRef, a plug-and-play method for diffusion models to generate consistent images from multiple references under instruction controls. It uses a multimodal large language model (MLLM) to capture consistent visual elements and introduces an efficient aggregation strategy and progressive training scheme to enhance performance and reduce computational costs. Additionally, a new benchmark (MRBench) is introduced for evaluating multi-reference consistent image generation. Experiments show that EasyRef outperforms existing methods in terms of consistency and generalization ability. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper's contributions are closely related to the broader scientific literature on diffusion models and multimodal large language models (MLLMs). It builds on recent advancements in using MLLMs and diffusion models for image generation, addressing limitations of existing methods in handling multiple reference images and fine-grained details. The introduction of an efficient aggregation strategy and a new benchmark further aligns with ongoing efforts to enhance computational efficiency and standardize evaluation in this field. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. Strengths: The paper is well-organized and clearly motivated, focusing on addressing the limitations of existing methods in handling multiple reference images and fine-grained details. The proposed EasyRef framework and the Multi-Reference Generation Benchmark are both meaningful contributions. The experimental results are also promising. 2. Weaknesses: While it is reasonable to extract consistent visual elements from multiple reference images and text prompts via an MLLM, the EasyRef architecture lacks novelty. Additionally, the authors only discuss the performance gains compared to previous methods but do not analyze the increased computational complexity introduced by using an MLLM to process multiple reference images and text prompts. Other Comments Or Suggestions: Some figures in the paper, such as Figure 3, need significant refinement in terms of aesthetics. Issues such as oversized fonts are currently present. Questions For Authors: 1. The visualizations presented by the authors currently focus on generating results from multiple reference images of the same single target. I am curious whether EasyRef can achieve customized generation when the multiple reference images contain the same two or more distinct targets. 2. The ablation studies in this paper seem somewhat incomplete. The authors does not explore the impact of different LLMs and different diffusion models on the proposed EasyRef framework. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer nUGh, Thank you for appreciating our approach. We will address your concerns below. **Q1. The novelty of this paper.** While we acknowledge that our project does not introduce groundbreaking new architectures, we wish to emphasize that it offers several important conclusions and methods that contribute novel insights to the community that are untouched by prior works. These conclusions and methods include identifying key limitations, developing effective data construction strategies, optimizing architecture and training for efficiency, and creating a new benchmark. Together, they provide a practical blueprint for building and evaluating a simple yet effective pipeline for multi-image consistent generation. We want to highlight three parts of differences. 1. Although some methods also utilize MLLMs and diffusion models, as acknowledged by Reviewer sCdX and nUGh, our design objectives, architecture and training optimizations, and tasks differ fundamentally. Other approaches leveraging MLLMs primarily use them for feature extraction in explicitly defined tasks (e.g., single-subject customization with a single reference image). In contrast, we demonstrate that MLLMs possess strong task instruction and multi-image comprehension capabilities, enabling them to extract consistent information based on instructions and multi-image contexts. Our method emphasizes the flexible understanding and extraction of consistent elements across multiple references, guided by instruction-based control. 2. We demonstrate that EasyRef with a simple architecture, can serve as a general-purpose model for multi-image customization and effectively cover various types of reference customization, such as subject, style, character, and face, rather than being limited to single-subject preservation or single-image subject-driven tasks. Consequently, EasyRef avoids the need for elaborate image preprocessing, such as subject-specific masking, making the process simpler and applicable to a broader range of scenarios. 3. By leveraging the unique capabilities of MLLMs combined with data construction methods and efficient designs, we achieve strong performance on the general multi-image consistent generation task. Our work focuses on building a comprehensive pipeline, including data construction, optimization of architecture and training strategies, and benchmark creation, rather than proposing an entirely new architecture. Together, we believe these underscore the impact and utility of our work, providing practical insights and methodological advancements that benefit the broader research community. **Q2. Additional computational complexity introduced by using an MLLM to process multiple reference images and text prompts.** We analyzed the model's performance and inference efficiency when varying the number of reference images, as shown in Figure 8. The table below presents the inference latency. The "baseline" refers to the standard SDXL text-to-image generation without image references. Our results show that incorporating an MLLM with multiple reference images only increases latency by 10%-20%. | Number of references | baseline | 1 | 2 | 4 | 8 | | - | - | - | - | - | - | | Latency | 4.31s | 4.78s | 4.87s | 4.96s | 5.11s | **Q3. Customized generation when the multiple reference images contain the same two or more distinct targets.** We provide visualizations of customized generation using two subjects as references at https://github.com/anonymous-projectuser/image. As shown in the examples, the generated cat and dog exhibit high fidelity to their respective reference subjects. EasyRef leverages the instruction-following and multi-image comprehension capabilities of MLLM to flexibly identify and preserve consistent elements (e.g., two subjects) without the need for complex image preprocessing. **Q4. The impact of different LLMs and different diffusion models.** We compare several EasyRef variants on MRBench, as shown in the table below. First, we observe that using a stronger base MLLM or diffusion model improves performance. Second, larger MLLMs significantly increase model complexity and latency. Therefore, we select Qwen2-VL-2B and SDXL to achieve the best trade-off between performance and efficiency. | MLLM | Diffusion Model | CLIP-I | CLIP-T | DINO-I | Latency | | - | - | - | - | - | - | | Qwen2-VL-2B | SD1.5 | 0.821 | 0.705 | 0.607 | 3.84s | | Qwen2-VL-7B | SDXL | 0.836 | 0.714 | 0.620 | 6.02s | | Qwen2-VL-2B | SDXL | 0.833 | 0.709 | 0.614 | 4.96s |
Summary: This paper proposes to use a vision language model (VLM) to encode subjects in reference images, and convert them to soft tokens to personalize diffusion models. It can take objects, animals and human faces as subjects. UPDATE after author response: Per my request, the authors provided extra evaluation data on human face similarities compared with PuLID, InstantID and ConsistentID. It signals an impressive message that EasyRef outperform these SOTA face encoders. However, I'm unable to reproduce such results. Instead, my own runs of the EasyRef repo led to different conclusions: EasyRef performs poorly on human faces. I actually didn't require EasyRef to outperform SOTA methods on human faces, and just wished the authors to provide a reference data point. However, the authors didn't provide honest answers. In this regard, I decide to lower my rating to reject. Update 2: The authors provided sample images generated by their most recent model checkpoints. Seems the subject similarity indeed improves substantially compared to the (outdated) online demo. Therefore, I decide to revert my rating to a 3, as an appreciation of the tremendous efforts paid by the authors. Nonetheless, the compositionality/editability of the updated samples seems to be lower than the SOTA face encoders. Moreover, I still think the writing of this paper should be improved substantially, in particular, highlight novel architectural design choices and training recipies. Claims And Evidence: 1. The claim that "Conventional methods encode consistent elements across reference images through averaging or concatenation (Shi et al., 2024),... fail to capture desired visual elements through effective image interaction under explicit controls" seem to be inaccurate. The supporting example is "the IP-Adapter (Ye et al., 2023) generates an inconsistent image when the spatial locations of the target subject vary across the reference images" (Figure 2). What are the "Inconsistent Result" shown in figure 2? Different poses/views of the same person? Is this really inconsistent? Moreover, I don't think there's much issue with averaging or concatenation the subject embeddings. There's no spatial information encoded in subject embeddings, so the inconsistency across result images shouldn't be caused by averaging or concatenation. Methods And Evaluation Criteria: 1. The method seems to be a straightforward scaling-up of MOMA (ECCV 2024). MOMA also uses a VLM (referred to as a "MLLM" in the MOMA and this paper) to encode subject characteristics and map them to subject embeddings. Moreover, MOMA also has a "Diffusion Learning Stage" that corresponds to the "Alignment pretraining stage". There are some minor differences on the architecture though, but I think the biggest difference is on the scale of the dataset (MOMA used 282K vs. EasyRef used 13M, 46x scaling up!) 2. Taking the drastically different data scales into consideration, it's not an apple-to-apple comparison to directly compare MOMA with EasyRef. An EasyRef model trained with similar data sizes would be more supportive of its architectural advantages, if there are any. Regardless of the data scale difference, EasyRef has almost identical performance as MOMA on DreamBench. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. Many baseline methods are too weak and non-SOTA. It's not necessary or useful to include them. For example in Fig 5., "SD image variations", "SD unCLIP", "Kandinsky 2.1", "Open unCLIP" etc. 2. When comparing with facial images, the SOTA methods are InstantID, consistentID, PuLID, etc. IP-Adapter FaceID is a weak baseline. The authors only compared with InstantID in the appendix. However, face similarities are not evaluated, which is one of the most important metrics on face embeddings. Supplementary Material: Figs. 11-14. Relation To Broader Scientific Literature: It's a simple and straightforward extension of MOMA (ECCV 2024). The biggest difference is it uses 46x larger training data, and encodes human faces in addition to objects and animals. There are some minor differences (e.g. adding a self-attention layer at the end of VLM to mix prompt tokens with the subject embeddings), but I don't think they are essential difference and don't have much impact on the overall performance. Essential References Not Discussed: A few SOTA face encoders are not mentioned, including PuLID and ConsistentID. Other Strengths And Weaknesses: Since EasyRef is just a simple and straightforward extension of MOMA (ECCV 2024) with 46x larger training data, I don't feel it has sufficient technical novelties to warrant publication in ICML. Other Comments Or Suggestions: 1. Although this paper follows the terminology of MOMA (ECCV 2024) to call the VLM as "multimodal LLM" (MLLM), personally I think it's more accurate to refer to it as "Vision language model" (VLM), because the LLM part of the model (in this paper, Qwen2-VL-2B) is quite weak compared with the commonly used MLLMs. Moreover, both MOMA and EasyRef don't use complicated language capabilities of the model. Therefore I think calling them VLMs would be more appropriate. 2. The first 3 pages are written disproportionately. The abstract and a cover image takes one page and the introduction takes another, followed by some diffusion equations which everyone in the field is familiar with. These parts can be greatly condensed. The most important part, Section 2.2 and 2.3, are written briefly. The saved space should be used for more technical details and discussions. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 35bv, Thanks for your comments. We will address your concerns below. **Q1. What are the "Inconsistent Result" shown in figure 2? There's no spatial information encoded in subject embeddings.** 1. The essence of the "Inconsistent Result" lies in the insufficient understanding of multiple reference features. Traditional methods fail to distinguish whether these features correspond to multiple subjects or different representations of the same subject. As a result, features corresponding to different spatial positions of the same subject are uniformly decoded, leading to the repetition of multiple subjects in the output. 2. Some subject-driven methods utilize subject-specific segmentation masks to exclude background and spatial information from the reference images. However, using a mask limits the generalizability of the framework. For instance, it becomes unsuitable for style and position-aware reference. General-purpose methods, such as IP-Adapter, avoid relying on masks, which can lead to such inconsistencies. Similarly, EasyRef does not elaborately preprocess input images, as it prioritizes generality over being limited to subject-driven tasks. 3. Additionally, using masks increases the complexity of the pipeline by requiring an extra segmentation model. Furthermore, inaccurate masks can degrade performance. For users, manually annotating masks during inference adds to the overall cost and reduces usability. 4. In cases where multiple subjects occlude each other (e.g., as discussed in Q5 of the rebuttal for Reviewer sCdX), using masks or bounding boxes may result in inaccurate subject embeddings. **Q2. The novelty of this paper.** Please refer to the Q1 of rebuttal for Reviewer nUGh. **Q3. EasyRef uses significantly more training data compared to MOMA.** 1. MOMA is specifically designed for subject-driven generation task using a single reference image, whereas EasyRef is a general framework for multi-image consistent generation. EasyRef is capable of preserving various consistent elements, including subjects (e.g., common animals or objects), characters, styles, and human faces. Given their distinct design objectives and pipelines, directly comparing the data scales of the two approaches is neither fair nor meaningful. 2. Furthermore, our multi-image fine-tuning leverages approximately 350K clean subject-driven target images, which is comparable in scale to the data used by MOMA. **Q4. EasyRef has almost identical performance as MOMA on DreamBench.** 1. The DINO score is preferred by DreamBooth because its self-supervised training objective encourages the model to preserve fine-grained subject features. This makes it uniquely well-suited, compared to CLIP scores, for evaluating subject fidelity in generated images. 2. As shown in Table 3, EasyRef surpasses MOMA by 1.8% in DINO score. This performance gain is not trivial since MOMA improves the DINO score of IP-Adapter from 61.2% to 61.8% on the DreamBench. **Q5. It's not necessary to include baseline methods that are too weak and non-SOTA.** We will carefully exclude some weak baselines in the final version. **Q6. Comparing EasyRef with SOTA facial reference methods and face similarities are not evaluated.** We conducted qualitative comparisons (visualizations in https://github.com/anonymous-projectuser/image) showing EasyRef’s superior facial fidelity and quantitative evaluations using the face similarity metric adopted by PuLID on the MRBench: | Method | Face Sim | | ------------ | -------- | | PuLID | 0.632 | | ConsistentID | 0.652 | | InstantID | 0.657 | | EasyRef | 0.673 | EasyRef outperforms SOTA methods in identity preservation, and these comparisons will be emphasized in the revised manuscript. **Q7. This paper follows the terminology of MOMA to call the VLM as multimodal LLM (MLLM), it's more accurate to refer to it as VLM.** We use "MLLM" to align with its growing adoption in the community for architectures combining vision encoders, projectors, and LLMs (e.g., LLaVA). While terms like "LMM", "VLM" or "LVLM" are also commonly used, our choice reflects current conventions rather than solely following MOMA. **Q8. The LLM of the model is quite weak.** Please refer to the Q4 of rebuttal for Reviewer nUGh. **Q9. Both MOMA and EasyRef don't use complicated language capabilities.** Our synthetic captions (typically 1–3 sentences) are more detailed than conventional single-sentence prompts and require strong language comprehension capability to encode. Unlike MOMA, which only focuses on subject categories, EasyRef explicitly controls the extraction of consistent elements like face, style, and subject through instructions, better utilizing the model’s instruction-following capabilities. **Q10. The first 3 pages are written disproportionately.** We will streamline the opening sections and expand Sections 2.2 and 2.3 in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for providing extra evaluation data on human face similarities compared with PuLID, InstantID and ConsistentID. It signals an impressive message that EasyRef outperform these SOTA face encoders. However, I'm unable to reproduce such results. Instead, my own runs of the EasyRef repo led to different conclusions: EasyRef performs poorly on human faces. I've tested 3 subjects with 1~3 reference images each, and generated 8 images (downloadable below): https://limewire.com/d/2c7Il#hkVnlyEHhX We can see that the face similarities are quite low on Trump and Fischoff, and are only good (comparable to SOTA methods) on Naran. I actually didn't require EasyRef to outperform SOTA methods on human faces, and just wished the authors to provide a reference data point. However, the authors didn't provide honest answers. In this regard, I decide to lower my rating to reject. Another negative point of EasyRef is that, it consumes a very high amount of GPU RAM. I got OOM on 48G GPUs. In order to run the demo, I had to pay by myself to use a 96G cloud GPU. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the feedback. We would like to clarify the following points: **Q1** Facial preservation performance is bad. 1. The model you evaluated was trained in November 2024, whereas the model described in this ICML submission was trained in January 2025 (before the ICML submission deadline). We withdrew our paper from CVPR 2025 and improved our model based on the feedback received from the CVPR 2025 comments. 2. When comparing with other SOTA facial preservation specialists, we follow their methodology [1][2], which employs an internal face detector to square-crop the human face in the reference image. This ensures that the face occupies the majority of the image. Example reference images can be found here: https://github.com/anonymous-projectuser/image/blob/main/face.png. This technique enhances generation results. However, we noticed that your reference images were not processed using this technique. 3. We observed that you only used a single reference image for the FisChoff case. However, our model is specifically designed for multi-reference consistent generation. As noted in line 265 of the paper, the minimum group size during finetuning is 2. Using only a single reference image may lead to suboptimal results from the network. 4. We conducted generation based on your tested reference images (Trump and Nashi) and prompts, and the results are shown in: https://github.com/anonymous-projectuser/image/tree/main/examples. Our results demonstrate good identity preservation capabilities. 5. Additionally, we provide visualization comparisons here: https://github.com/anonymous-projectuser/image/blob/main/face.png. All reference images and generated outputs are included. The results of other models were produced using their official checkpoints or demos hosted on their respective Hugging Face Spaces. **Q2** Inference OOM issue. The Out of Memory (OOM) issue arises because 8 images are being generated per prompt. For comparison, generating 2 images requires 29 GB of memory, whereas the SDXL text-to-image pipeline uses 24 GB for the same generation setting. While our approach consumes slightly more memory, it introduces the capability for multi-image consistent generation. [1] InstantID: Zero-shot Identity-Preserving Generation in Seconds. (https://github.com/instantX-research/InstantID) [2] ConsistentID : Portrait Generation with Multimodal Fine-Grained Identity Preserving. (https://github.com/JackAILab/ConsistentID/blob/b42b725c49fcba83f57df63f9df610b703564447/pipline_StableDiffusionXL_ConsistentID.py#L74) **UPDATE:** Thank you for acknowledging our work and for raising the score. We greatly appreciate your thoughtful review and constructive feedback. In our revised manuscript, we will substantially improve the paper's clarity through more polished writing, enhanced visual figures, and a refined experimental section that addresses all reviewers' concerns. We sincerely appreciate your time and effort in reviewing our paper. Thank you once again!
Summary: In the area of image personalization, tuning-free methods fail to capture consistent visual elements across multiple references, and tuning based methods require finetuning for new groups. In response, to learn a effective and efficient subject representation across a group of references, this paper proposes EasyRef, an approach that leverages MLLM's ability to capture the visual features across multiple images. In the representation learning process, a reference aggregation strategy and a progressive training scheme have been designed. To evaluate the performance, a paired image benchmark, MRBench, is collected. Claims And Evidence: - The paper claims that the proposed representation learning method can accurately extract the consistent details from multiple references and requires no optimization or test-time finetuning. While this claim is mostly correct (proved by Tab. 2 and 3), it is still questionable whether it can capture intricate details of subjects having complex textures (e.g., objects from DreamBench). Also the paired-data collection method (Sec. A.3) does not guarantee that only the same instance will be grouped together. - The proposed method is very effective on capturing human faces and style, as demonstrated by the results in Fig. 5 and 6. Methods And Evaluation Criteria: - One of the main technical contributions, progressive training scheme is proved effective in Sec. 4.3; however, there should be more experiments ablating the design of the reference aggregation. - In Tab. 3, the improvement of EasyRef seems marginal over the other baselines on DreamBench. Theoretical Claims: The equations (5-9) look reasonable and correct to me. Experimental Designs Or Analyses: - (Minor) Fig. 2 shows the spatial misalignment issue of the embedding averaging, which provides valuable insights for similar works. However, this could be resolved by segmenting and cropping around the object. It would be more helpful if analysis/examples of other failure scenarios could be provided. - EasyRef should be compared with more strong baselines in image customization, such as CustomDiffusion, MS-Diffusion, etc. Now the visual results are mostly about styles and human faces, and objects with complex textures (e.g., the rigid objects from DreamBench) are missing. More visual comparisons should be shown. - The current pipeline of data clustering (Sec. A.3) is limited since it cannot ensure that only images of the same instance are grouped (e.g., images of the same category but different instance may also be included). This limitation will constrain the model's ability in detail preservation. Including object-centric video data may be one way to further improve identity preservation. Supplementary Material: Yes, mainly A.2 and A.3 (data curation). Relation To Broader Scientific Literature: In image personalization, most previous methods focus on improving the diffusion model itself. However, in this paper, we leverage MLLMs as feature extractors and introduce a carefully designed representation learning stage to capture visual features, offering new insights to the community. Essential References Not Discussed: No. Other Strengths And Weaknesses: - In Fig. 1, the task is not very clearly showed. It may lead to misunderstanding that the images on the right are generated by MLLMs. Maybe mentioned that Diffusion models are used for the customization. - (Strength) In the second stage, the vision encoder of the MLLM is also trained, which is a reasonable setup to improve the quality and capacity of the learned representations. Other Comments Or Suggestions: My concerns are mainly about the performance on identity preservation with complex objects; but the idea is solid and novel. Questions For Authors: - During the alignment pretraining, are the MLLM and Diffusion trained together or is the MLLM the only model pretrained? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer sCdX, Thanks for appreciating our work and your advice. We will address your concerns below. **Q1. Can EasyRef capture intricate details?** We provide visualizations on the DreamBench benchmark (https://github.com/anonymous-projectuser/image), demonstrating our method's ability to preserve intricate texture details. We also show a failure case in the last row. EasyRef fails to accurately preserve the textual details, primarily due to the inherent limitations in text rendering capabilities of the SDXL base model. **Q2. The collection method does not guarantee that only the same instance will be grouped together.** 1. We acknowledge that subjects belonging to the same category but not the same instance can be grouped together. However, this does not significantly impact subject-driven performance. Multi-image subject-driven generation requires that the main subjects across multiple reference images represent the same instance, and the generated outputs should contain the same instance. When multiple reference images depict subjects from the same category but not the same instance, using a target image with a subject from the same category is still a reasonable training approach. 2. To further enhance subject-driven performance, we have also constructed image groups using a stricter data filtering strategy (described in line 797 of our paper) by applying a higher DINO score threshold. Additionally, as discussed in line 250, we collected high-quality images and used them to train subject LoRA models. These LoRA models were then used for generating additional training data. 3. We randomly sampled 200 images from the training dataset and found that the proportion of such data is small (11 out of 200). **Q3. Ablating the design of the reference aggregation.** We have already conducted ablation studies on the aggregation designs, as shown in Table 5 and Table 6 of our paper. Additionally, we provide more ablations in the table below: | Design | CLIP-I | CLIP-T | DINO-I | | ------ | ------ | ------ | ------ | | EasyRef | 0.833 | 0.709 | 0.614 | | + causal attention | 0.826 | 0.708 | 0.610 | | + inserting tokens before the first LLM layer | 0.836 | 0.710 | 0.615 | | + new cross-attention layers | 0.816 | 0.712 | 0.601 | From these results, we observe (1) using bidirectional attention in the final layer enables better interaction among reference tokens, (2) inserting reference tokens before the first layer of the LLM increases training costs but does not yield significant performance gains, and (3) the adopted decoupled cross-attention mechanism is more effective and efficient compared to adding new cross-attention layers. **Q4. The improvement of EasyRef seems marginal on DreamBench.** Please refer to the Q4 of rebuttal for Reviewer 35bv. **Q5. It would be more helpful if other failure examples could be provided.** We have included two additional failure cases for analysis in https://github.com/anonymous-projectuser/image: 1. Attribute Confusion: When the regions of the dog and the chair in the reference images significantly overlap (e.g., the dog partially or fully covers the chair), simply averaging the features from these reference images can result in attribute confusion. For instance, the generated dog might inherit the chair's color, while the chair adopts the dog's color, leading to inconsistent generation results. 2. Subject Hallucination: When the dog appears in front of the chair in one reference image but is seated on the chair in another, the simple fusion method may be misled by the positional discrepancy. This can result in subject hallucination, where the diffusion model generates an additional dog-shaped object on the chair. **Q6. EasyRef should be compared with more strong baselines.** We will cite and discuss these state-of-the-art methods, such as CustomDiffusion [1], MS-Diffusion [2], and λ-ECLIPSE [3] in the final version. Their performance scores will be incorporated into Table 3. **Q7. In Fig. 1, the task is not very clearly showed.** We will adjust the teaser figure and modify its caption to make the task clear. **Q8. During the alignment pretraining, are the MLLM and Diffusion trained together or is the MLLM the only model pretrained?** We insert newly initialized reference tokens into the final layer of the LLM for aggregation. This makes the alignment pretraining process efficient, as we only train the final layer of the LLM, the reference tokens, the newly added cross-attention adapters in the U-Net, and the condition projector. The rest of the model remains frozen. [1] Multi-Concept Customization of Text-to-Image Diffusion [2] MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance [3] λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space --- Rebuttal Comment 1.1: Comment: I appreciate all the additional experiments from the authors. My concerns have all been addressed and I have no more questions. I will maintain my current rating of accept.
null
null
null
null
null
null
Risk and cross validation in ridge regression with correlated samples
Accept (poster)
Summary: This is a theoretical paper that studies cross-validation in ridge regularized linear regression. The authors study an understudied regime in the literature on cross-validation: when the samples are *not* i.i.d. In this case, they show that the traditional generalized cross-validation (GCV) estimator does not correctly estimate the out-of-sample error. By theoretically analyzing the exact out of sample error, the authors derive a cross-validation estimator that *does* correctly estimate the out-of-sample error. The authors extend the results to the case of distribution shift (the test point at which one wishes to assess the out-of-sample error comes from a different distribution than the training data), and when the test point is correlated with the training data. Claims And Evidence: I want to be up front that I really struggled to understand the detailed results of the paper, as I don't have a background in random matrix theory / free probability. My one suggestion along these lines is that the paper could do a much better job of highlighting what the important results are, breaking them out into theorem/lemma/proposition environments, and then discussing the importance of these results. As-is, the paper reads a bit like a stream of formulae, without much connection back to results that matter to the broader machine learning community. I say this because I think there's some good potential impact to the ML community in this paper, but it's a little hard to dig out as-is. The only significant discussion is the final paragraph of the conclusion which comments on the applications of the theory. I think significantly adding to that discussion and moving more of the technical results to the appendix would strengthen the paper and better connect it to the ML community. Methods And Evaluation Criteria: Yes, this is a theoretical paper, and the authors use synthetic data to demonstrate their results. This seems appropriate to me. Theoretical Claims: I attempted to check the proofs of some of the claims, but I don't have the required mathematical background to do so. I am a little worried whether much of the broader machine learning community has the needed background, and so whether ICML is an appropriate venue for this paper. An example of this is in Appendix C.1, which is a "warm-up" proof of preliminary results.The proof uses terms like "insertions", "Wick contractions", "crossing diagrams" that I've never heard of and struggled to find definitions of online. And the proof proceeds by drawing pictures, rather than using algebra, in a way that I'm also not familiar with. Experimental Designs Or Analyses: The paper is light on experiments because it's a theoretical paper. The use of synthetic data to back up the theoretical results is a nice addition. Supplementary Material: Yes, I tried to check over some of the proofs, but as mentioned above, couldn't really make much progress there. Relation To Broader Scientific Literature: Cross-validation (CV) has a long history in statistics and machine learning. Despite its long history, its theoretical properties are complicated. In the last ~10 years, a growing body of work has started to understand the theory of CV. However, this body of work has primarily focused on models fit to i.i.d. samples, which is not realistic in practice. So, while the model studied by the authors is relatively simple (ridge regression), it starts to fill in an important gap in our understanding of a widely used procedure (CV). ... all of this is assuming I've correctly understood the results of the paper! Essential References Not Discussed: I don't believe there is anything major missing. Other Strengths And Weaknesses: 1. I think the theoretical development could be better organized to help readers. First, as mentioned above, I think the major results should be broken out into Proposition / Lemma / Theorem environments. This would help not only highlight the major results but also collect all of the necessary assumptions. E.g., which results require the independence of the noise $\epsilon$ from the covariates $x$? This is stated in one sentence in Appendix C.5 (~line 1202), but I'm not sure which results it applies to. 2. It wasn't always clear where to find the proof of various results in the paper. E.g., the first main result is derived in Section 3.2, which just states that some of the results are derived "in Appendix C." But Appendix C contains many of the proofs. Other Comments Or Suggestions: Nothing else Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and forthright assessment of our paper. We appreciate your concerns regarding the clarity of our manuscript, and will revise it to make its impact clearer. - Following your suggestion, we will re-organize the main text and Appendices into Theorem-Proof style. This will make the paper easier to navigate for the reader, and as you noted will break the lengthy Appendix C into more manageable chunks. - Your concerns regarding the accessibility of the proof methods to the ICML audience are well-taken. Our terminology, and the particular diagrammatic approach to the proof, is taken from physics. The core idea is to compute $\mathbb{E} \hat{\Sigma} (\hat{\Sigma} + \lambda I)^{-1}$ by expanding order-by-order in $1/\lambda$ and then re-summing the resulting series of moments. Here, "Wick contractions'' are pairings of indices corresponding to non-vanishing joint moments of elements of the Gaussian random matrix. This terminology comes from the fact that Isserlis' theorem for Gaussian moments is known in physics as "Wick's theorem'' (Wick's theorem in physics is in fact far more general, but the terminology is applied also in the case of a Gaussian random vector). This approach is closely related to the diagrammatic formulation of the moment method in random matrix theory, where one builds a diagrammatic notation to represent the combinatorics of which elements in the expansion of an expression like $\mathbb{E}_{M} \mathrm{Tr}(M^n)$ give non-negligible contributions. "Crossing diagrams'' refer to a subset of these pairings of indices that give rise to non-zero moments but do not contribute at leading order to the final result. "Inserting'' a matrix corresponds to adding it to an ordered product of matrices at a particular location. We apologize for this rather heavy use of jargon; we will clarify each term in the updated version of our manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the response! The quick description here starts to help me understand some of the results. E.g. I am familiar with Isserlis' theorem, and I can kind of see how a diagramatic proof could be more helpful here over an algebraic one. I think adding more details about this proof technique / how Isserlis' theorem is being used will make the paper more self-contained for a general machine learning audience. Best, GGLG --- Reply to Comment 1.1.1: Comment: Thank you again for your valuable feedback! To briefly respond to the question about the application of Isserlis' theorem implicit in your comment: In the proof, one must (roughly speaking) study expectations of the form $\mathbb{E}[ (X^{\top} X)^{n} ]$ for some integer $n$, where $X$ is a $T \times N$ Gaussian matrix. This can be expanded as a sum over joint moment of elements of the Gaussian matrix $X$. For example, for $n=2$ we have $$\mathbb{E}[ (X^{\top} X)^{2} ]\_{ij} = \sum\_{k,l,m} \mathbb{E}[X\_{k i} X\_{k l} X\_{m l} X\_{m j}].$$ Then, Isserlis' theorem can be applied to expand this joint moment as $$\mathbb{E}[X\_{k i} X\_{k l} X\_{m l} X\_{m j}] = K\_{kk} \Sigma\_{il} K\_{mm} \Sigma\_{lj} + K\_{km} \Sigma\_{il} K\_{km} \Sigma\_{l j} + K\_{k m} \Sigma\_{ij} K\_{k m} \Sigma\_{ll},$$ which leads to $$\mathbb{E}[ (X^{\top} X)^{2} ]\_{ij} = \text{Tr}(K)^2 (\Sigma^2)\_{ij} + \text{Tr}(K^2) (\Sigma^2)\_{ij} + \text{Tr}(K^2) \text{Tr}(\Sigma) \Sigma\_{ij}.$$ In the high-dimensional limit $N,T\to\infty$ with $N/T = \Theta(1)$, we assume that all traces are $\Theta(N)$: $\text{Tr}(\Sigma) \sim \text{Tr}(K) \sim \text{Tr}(\Sigma^2) \sim \text{Tr}(K^2) \sim \Theta(N)$. Therefore, the first and third terms are of the same order, while the $\text{Tr}(K^2) (\Sigma^2)\_{ij}$ term is sub-leading. This is a simple example, as one must figure out how to study moments with arbitrary $n$, but it illustrates the key components of the proof. The diagrammatic notation succinctly represents these types of sums.
Summary: The paper proposes CorrGCV, a modified version of the more well known generalized cross validation estimator (GCV) to estimate out-of-sample risk from in-sample data. Claims And Evidence: The theoretical derivations are sound. Methods And Evaluation Criteria: The theoretical derivations are sound, the experiments are convincing. Theoretical Claims: The theoretical derivations are sound. Experimental Designs Or Analyses: Simulations are sound, however the paper lacks real world application analyses. Supplementary Material: Proofs in the supplementary material are sound. Relation To Broader Scientific Literature: The paper proposes CorrGCV, a modified version of the more well known generalized cross validation estimator (GCV) to estimate out-of-sample risk from in-sample data. Essential References Not Discussed: Essential references have been mentioned. Other Strengths And Weaknesses: * Modified CorrGCV is very interesting, and an improvement * However, it is unclear how sensitive the results will breakdown if assumptions are violated * Simulations are sound, however the paper lacks real world application analyses. Other Comments Or Suggestions: n/a Questions For Authors: * However, it is unclear how sensitive the results will breakdown if assumptions are violated: assumptions on correlation structure are too strcit * Simulations are sound, however the paper lacks real world application analyses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are glad that the referee found our theoretical results "sound'', and our experiments "convincing.'' We appreciate the referee's concerns regarding demonstration of real-world applications, but given that our paper is theoretical in nature we are surprised by the strongly negative assessment. In particular, it is the first theoretical analysis of high-dimensional ridge regression with correlated samples, and thus contributes to the extensive theoretical literature on this simple and fundamental learning algorithm. We would like to elaborate on why we believe the assumption of Gaussian covariates (under which our theoretical results were derived) is not highly restrictive. This assumption is motivated by the extensive literature on Gaussian universality in ridge regression: in the high-dimensional regime of interest, the risk obtained for some non-Gaussian dataset is asymptotically equal to that for Gaussian data of matched covariance (see, for instance, Hu and Lu https://arxiv.org/abs/2009.07669, Dubova et al https://arxiv.org/abs/2009.07669, or Misiakiewicz and Saeed https://arxiv.org/abs/2403.08938, or the many references cited within these works). Indeed, contemporaneous work from Luo et al (https://arxiv.org/abs/2406.11666) shows universality for data where features are uncorrelated but samples are correlated. And, since the submission of our manuscript, work by Moniri and Hassani (https://arxiv.org/abs/2412.03702) has shown universality for covariates of the form $X \overset{d}{=} K^{1/2} Z \Sigma^{1/2}$ where $Z$ has general sub-Gaussian i.i.d. entries; this generalizes our model in which we assumed the entries of $Z$ to be Gaussian. Therefore, we do not believe the assumptions under which are theoretical results are derived to be overly restrictive. We mentioned this motivation in our submitted manuscript, and will endeavor to make it more explicit in the revised version of our work. In light of this, we respectfully request the referee to reconsider their assessment. While a comprehensive experimental validation of the CorrGCV estimator with real data will be important to establish its broad applicability, to do this justice would require a manuscript-length study in its own right. Therefore, we believe that the theoretical contributions of this work can stand alone. --- Rebuttal Comment 1.1: Comment: By assumptions on correlation structures are too unrealistic, and bounds are only asymptotic. --- Reply to Comment 1.1.1: Comment: We thank the referee for their continued engagement. We would like to clarify two points raised in their brief rebuttal comment: First, we would like to emphasize that our results are not bounds. Rather, they are sharp asymptotics in the sense that they precisely give the limiting behavior of the risk. As our experiments show, they are predictive even for relatively modest dimension and number of training examples. Moreover, though doing so would be outside the scope of the present paper, the finite-size corrections could in principle be entirely quantitatively controlled. One way to do so is to compute a series of corrections in $1/N$ and $1/T$, as discussed in the Appendices. Another way would be to quantitatively bound the error terms. In particular, we conjecture that it should be possible to prove explicit high-probability multiplicative error bounds roughly of the form $R = [1+O(T^{-1/2})] \mathcal{R}$ where $R$ is the out-of-sample risk and $\mathcal{R}$ is the asymptotic deterministic equivalent as derived in our work. For the case of independent training points, dimension-free bounds of this form have been obtained by Cheng and Montanari in https://arxiv.org/abs/2210.08571, and by Misiakiewicz and Saeed in https://arxiv.org/abs/2403.08938. These proofs are lengthy and rather technical, and were developed following detailed study of the sharp asymptotics previously obtained for independent data. In sum, we do not view the asymptotic nature of our results to be a strong limitation. Second, we would like to emphasize again that though our data model $X \overset{d}{=} K^{1/2} Z \Sigma^{1/2}$ does impose restrictions (in particular that the correlations across samples and across features factorize), previous works have obtained sharp asymptotics only for diagonal $K$, i.e., for independent training points. Therefore, we make a much weaker assumption on the correlation structure of the data than has been standard in the study of ridge regression. We respectfully request that you clarify why you do not view this generalization to be a sufficient advance. For instance, do you have a particular data generating model in mind?
Summary: This paper investigates the problem of high-dimensional ridge regression with correlated data, which is a common feature in time series. Using methods from RMT and free probability, the authors derive sharp asymptotics for the in- and out-of-sample risk, showing that the standard cross-validation estimator fails for non-iid data, as it is asymptotically biased. They introduce a corrected GCV estimator that is unbiased and accurately predicts the out-of-sample risk, extending their results to the setting where test points are correlated with training points. Finally, they show that correlations can smooth the double-descent peak. Claims And Evidence: All claims are supported by convincing evidence and limitations are clearly stated. Methods And Evaluation Criteria: The methods used are well suited to the problem considered, as they are frequently employed in the study of ridge regression problems. Theoretical Claims: I have no issues to discuss. Experimental Designs Or Analyses: I have no issues to discuss. Supplementary Material: I have reviewed the supplementary materials in their entirety, focusing on sections A-D and J-K. I have no issues to discuss. Relation To Broader Scientific Literature: This work is related to several studies in the literature on ridge regression, extending for the first time previous findings to anisotropic data that exhibit correlations between samples (and additionally to correlations between test and training data). Most previous works have considered iid data or correlations in the label noise. Besides providing a corrected GCV estimator, the authors analyze the standard estimator as well as other estimators proposed in the literature, deriving sharp asymptotics and demonstrating their failure in high dimensions. Essential References Not Discussed: I am not aware of any essential references that have been omitted. Other Strengths And Weaknesses: I have not identified any major weaknesses. The presentation is clear, and the authors provide detailed explanations on algorithmic implementations, experiments, and the background knowledge necessary to understand their derivations. Even with its limitations, which are appropriately discussed, this paper provides original and interesting theoretical results closely related to time series applications. Other Comments Or Suggestions: -In several plots, the colors for CorrGCV and NaiveGCV_1 are very similar. I suggest changing one of them to improve clarity. -Even if clearly distinguishable from the context, I believe that the notations ${\rm df}^2_{A}$ and ${\rm df}^2_{\Sigma\Sigma'}$ (etc.) may be misleading, since ${\rm df}^2_{\Sigma\Sigma'} \neq {\rm df}^2_{A = \Sigma\Sigma'}$. -Typo on line 436: the reference authors are repeated twice. Questions For Authors: I do not have additional important questions. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of our manuscript, and are gratified by their strongly positive assessment. We will update the paper to address all three of their comments: - We will change the color used for the CorrGCV to to distinguish it from the NaiveGCV\_1. - We will adopt the notation $\mathrm{df}^{2}\_{\Sigma, \Sigma'}$ to distinguish it from $\mathrm{df}^{2}\_{A=\Sigma\Sigma'}$. We would appreciate the reviewer's feedback on whether this notation is sufficiently distinct, or if further changes would be helpful. - We have fixed the typo on Line 436, thank you for catching this. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. The proposed notation seems sufficiently clear to me. --- Reply to Comment 1.1.1: Comment: We are glad that these changes address your concerns. Thanks again for your careful assessment of our submission!
Summary: This paper employs novel techniques from random matrix theory and free probability to analyze the asymptotic properties of the generalized cross-validation (GCV) as an empirical risk estimator for high-dimensional ridge regression, particularly in settings with cross-sectional and temporal correlations in both the covariates and the label errors. By leveraging insights from the in-sample and out-of-sample risks, the authors develop a new consistent estimator, CorrGCV, for the out-of-sample risk when the label noise exhibits the same correlation structure as the covariates. The performance of CorrGCV is further examined through a series of numerical experiments. Overall, the paper is well-written and was a pleasure to read. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: For the most part, the experimental designs make sense; however, I would like to see a bit more. For example, the purpose of cross-validation is essentially to choose the optimal tuning parameter $\lambda$ that minimizes the out-of-sample risk. For this approach to be effective, CorrGCV must accurately approximate the risk over a wide range of $\lambda$ values. Therefore, it would be beneficial to verify this behavior, rather than relying solely on the current setting with a fixed $\lambda = 0.001$. Supplementary Material: I have reviewed the proof; however, I must admit that I am not very familiar with the techniques employed. Consequently, I will rely on other reviewers for a detailed evaluation of the technical aspects of the proof. Relation To Broader Scientific Literature: I believe that the results of the paper enhance our understanding of the behavior of the out-of-sample risk when correlations are present in the data. Essential References Not Discussed: 1. Line 63-66, the authors state that "However, most of these works focus on correlations only in the label noise, and none of them show that their estimators are asymptotically exact in high dimensions. " This is not entire accurate. There has been some work in the smoothing spline literature (which is essentially a high-dimensional Kernel ridge regression) on how to approximate the out-of-sample risk consistently when there are correlations. For example, [1]-[3]. 2. The asymptotic properties of the GCV for independent data have also been investigated in depth in \cite{ref5} and \cite{ref6} for kernel ridge regression, which I believe are also relevant. References: [1]. Wang, Y. (1998). Smoothing spline models with correlated random errors. Journal of the American Statistical Association, 93(441), 341-348. [2]. Xu, G., & Huang, J. Z. (2012). Asymptotic optimality and efficient computation of the leave-subject-out cross-validation. [3]. Gu, C., & Ma, P. (2005). Optimal smoothing in nonparametric mixed-effect models. [5]. Xu, G., Shang, Z., & Cheng, G. (2018). Optimal tuning for divide-and-conquer kernel ridge regression with massive data. In International Conference on Machine Learning (pp. 5483-5491). PMLR. [6]. Xu, G., Shang, Z., & Cheng, G. (2019). Distributed generalized cross-validation for divide-and-conquer kernel ridge regression and its asymptotic optimality. Journal of computational and graphical statistics, 28(4), 891-908. Other Strengths And Weaknesses: Strength: The analysis is thorough and informative, and the paper is quite well written. Weakness: I am not sure how widely applicable the proposed CorrGCV criterion is in practice, since it requires that the label noise has the same correlation structure as the covariates ($K=K'$). Such a requirement does not seem to be very common in practical applications. Other Comments Or Suggestions: I believe that the motivation for the model considered in (1) needs to be significantly strengthened. A practical approach would be to provide concrete examples that demonstrate the usefulness of model (1) in real-world applications. In particular, since the proposed CorrGCV is specifically designed for the case where $K=K'$, it would be beneficial to include examples that illustrate why this assumption is both common and useful in practice. Questions For Authors: 1. In the last paragraph of Section 1, the authors discuss the weighted least squares loss with a weight matrix $M$. It is stated that "this is equivalent to considering an isotropic loss under the mapping ... and as a result, our asymptotics apply immediately to general choices of $M$." I am afraid that this is not so simple. Under such a mapping, one would have to change the definition of the response vector $\mathbf{y}$, and consequently, the definition of the risk would become dependent on $M$. Therefore, the theory would be applied to a different in-sample and out-of-sample risk than that originally defined for $\mathbf{y}$. Could you please clarify this point? 2. Can you provide more details on the derivation of the equations in (5)? As someone who is not familiar with free probability theory, I find it difficult to understand why these equations hold. Is this due to some special properties of Wishart matrices? 3. What are the conditions on $K$ and $\Sigma$ for equation (6) to hold? There is no requirement on the eigenvalues of $K$ and $\Sigma$? 4. In my opinion, the most interesting case is discussed in Section 3.3, as I believe it is very common in practice. However, the conclusion from this section appears to be that "there is nothing we can do to approximate the out-of-sample risk." Is my understanding correct? If so, this outcome is quite disappointing and significantly limits the practical value of this work, as assuming $K=K'$ is rather restrictive. Could you explore the possibility of developing an alternative cross-validation method to approximate the out-of-sample risk based on the relationship between $R_g$ and $\hat{R}$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful assessment of our manuscript. First, with regards to your questions and concerns: 1. You are correct in stating that with the introduction of a weighting $M$ the definition of the effective vector of measured responses changes. Our observation here is simply that under our Gaussian statistical assumption on the matrix of observed covariates $X \overset{d}{=} K^{1/2} Z \Sigma^{1/2}$ and noise $\epsilon \overset{d}{=} (K')^{1/2} \eta$ (for $Z$ and $\eta$ having i.i.d. Gaussian elements), the minimizer of the weighted risk is equal in distribution to the minimizer of an unweighted risk with $K$ and $K'$ replaced by $M^{1/2} K M^{1/2}$ and $M^{1/2} K' M^{1/2}$, respectively. Therefore, one can apply the same results to obtain the asymptotics of the out-of-sample risk as defined in Line 147-148, which does not depend on the choice of weighting. We will revise the discussion starting on Line 157, and Appendix H, to clarify this issue. 2. For a discussion of the derivation of (5), please see our response to Reviewer GGLG, who voiced similar concerns about the accessibility of the proof. In brief, the multiplicative property of the $S$-transform is one of the fundamental results of free probability. In this case it follows from studying the asymptotics of the resolvent of a Wishart matrix, but this multiplicative property extends to more general products of ``free'' random matrices (see Appendix B or the textbook of Potters and Bouchaud for a definition). 3. The basic assumption we need to make on $K$ and $\Sigma$ is that their empirical eigenvalue distributions tend to well-defined limiting measures as $T$ and $N$ tend to infinity, respectively. The support of these measures should be bounded strictly away from zero, and spectral moments of all orders (e.g., $\frac{1}{T} \mathrm{Tr}(K^{n})$) should be bounded. In our revised manuscript, we will make this clearer by stating these conditions as an explicit Assumption (as part of re-organization into Theorem-Proof style, as requested by Reviewer GGLG). 4. Our focus on the case $K=K'$ is fundamentally motivated by our theoretical result that in that case the asymptotic risks are proportional. This is the same proportionality property that enables the definition of the (easily-computable) classic GCV as an asymptotically unbiased estimator of the out-of-sample risk for independent training samples. Our analysis shows that if $K \neq K'$ then the training and test risks are not in general directly proportional, which means that an asymptotically un-biased estimator should not have the simple form of the GCV. This does not, however, mean that no estimator of the out-of-sample risk exists; it just must be something more than a multiplicative correction to the training risk. One practically-relevant setting in which one would expect to have $K=K'$ is if the noise arises through components of the target function that cannot be learned through linear regression. Namely, nonlinear components will act as effective noise, as is discussed in the literature on Gaussian universality cited in our response to Reviewer XMHo. As mentioned there, we believe that a comprehensive examination of practical applications of the CorrGCV estimator is beyond the scope of this paper, as it is primarily theoretical in its aims. We will expand our discussion of Gaussian universality in our revised manuscript. In addition to the changes reflected above, we will make the following additions to our manuscript: - We will add a figure showing a sweep over values of $\lambda$ to show that the CorrGCV is broadly effective. - We will add citations to the five references you suggested; we regret that we missed these relevant works in preparing our manuscript. --- Rebuttal Comment 1.1: Comment: I would like thank the authors for addressing my questions. For 1, what I meant is that the weighted risk is different from the unweighted risk function, and depends on the chosen weight matrix. The purpose of weighted least square is to reduce the unweighted risk by applying an appropriate weight matrix. If you switch the goal to minimize the weighted risk, the approach is less meaningful. For 4, I am still looking for a concrete example where $K=K'$ is a necessary assumption. --- Reply to Comment 1.1.1: Comment: Thank you for following up; we would like to clarify our responses to the two points mentioned. We regret that our description of how the weighted risk is used was not clear. To be concrete, we consider a case in which the estimator $\hat{w}\_{M} = \text{argmin}\_{w} \hat{R}\_{M}(w)$ is defined by minimizing a weighted in-sample risk $\hat{R}\_{M}(w) = (Xw-y)^{\top} M (Xw-y)$ for some weighting matrix $M$. Then, we consider the out-of-sample risk $R(\hat{w}\_{M}) = \mathbb{E}\_{(x,y)}[ (\hat{w}\_{M}^{\top} x - y)^{2} ]$ for that estimator. Here, the distribution of the test sample is given by $x \sim \mathcal{N}(0,\Sigma)$ and $y\,|\, x \sim \mathcal{N}(\bar{w}^{\top} x, \sigma_{\epsilon}^2)$. The definition of the out-of-sample risk---which is, as the reviewer notes, what we fundamentally want to minimize---is not affected by the choice of $M$. What we compute is the asymptotics of $R(\hat{w}\_{M})$, i.e., the out-of-sample risk for the weighted estimator. We hope that this clarification addresses your concern; we will revise our manuscript to make this point clear. We emphasize again that the condition $K = K'$ is a *result* of our asymptotic analysis, not an assumption, and our theoretical results include the case $K \neq K'$. We do not quite follow what the reviewer seeks in terms of a case where $K = K'$ is a *necessary* assumption. Our point is that $K = K'$ is sufficient for there to exist an asymptotically unbiased non-omniscient estimator of the out-of-sample risk that can be written as a multiplicative correction to the in-sample risk. With regards to particular examples where $K = K'$, we can of course identify generative models where this is naturally the case (see below), but cannot provide a complete taxonomy. In regards to specific generative models in which one has $K = K'$: - Suppose that one observes only a subset of the relevant features, and fits a mis-specified model. Then, as discussed at length in Section 5 of Hastie et al, https://arxiv.org/abs/1903.08560, one will incur additional out-of-sample risk due to mis-specification bias, as well as mis-specification variance that acts identically to additive noise on the targets. This case is of clear practical relevance, as in many settings one cannot observe all relevant covariates. - Suppose that the target function has un-learnable modes. That is, assume that there are components of the target function that cannot be learned with ridge regression even with infinite training data, which would correspond to vanishingly small eigenvalues of $\Sigma$ (see, for instance, Canatar et al., https://www.nature.com/articles/s41467-021-23103-1, Xiao et al. https://iopscience.iop.org/article/10.1088/1742-5468/ad01b7, or Atanasov et al., https://arxiv.org/abs/2405.00592 for related discussions). This is closely related to the presence of un-observed features. Then, these components will act as an effective noise with covariance $K$. We will mention these and other examples in the updated version of our manuscript. We would like to re-iterate that a comprehensive empirical study of how well the proposed CorrGCV estimator performs when applied to real data requires a manuscript in its own right. We acknowledge that this is a limitation of the present work, but emphasize that our goal here is to lay theoretical foundations, as past works have not derived precise asymptotics for high-dimensional ridge regression with correlated samples.
null
null
null
null
null
null
WATCH: Adaptive Monitoring for AI Deployments via Weighted-Conformal Martingales
Accept (poster)
Summary: The paper proposes an extension of conformal test martingales (CTMs) by the use of weighted conformal p-values rather than standard conformal p-values. The authors argue that such weighted p-values permit testing for more general hypotheses beyond simple (online) exchangeability, and propose a hypothesis that helps disambiguate between covariate and concept/label shifts. A density ratio estimator is employed and the weighted p-values are shown to experimentally adapt to small covariate shifts, whereas stronger covariate and concept shifts are detected. The method is compared to standard CTMs and one type of sequential e-process-based testing on tabular and image data. Claims And Evidence: The approach is motivated by three goals listed in the introduction and in Figure 1, namely adaptation to small covariate shifts, detection of stronger shifts, and disambiguation on shift origin. From what I can see, all of these goals are addressed to some extent by the method, and are not overclaiming. I was nonetheless bothered by the claim that "Our empirical results show that WATCH substantially outperforms SOTA in these areas" which I find to be overly strong, given the actual reported results (e.g. Table 1 and 2), as well as lack of baseline comparisons and some of the issues I had with the experimental designs in general (see below). Methods And Evaluation Criteria: The method (WATCH/WCTM) leverages a single test martingale design adapted from prior work (fixed betting function design, nonconformity scoring function, density ratio estimator, Shiryaev-Roberts supplement, predictor) and is compared against standard CTMs on three tabular datasets with simulated benign shifts (Fig. 2), and against CTMs and a sequential e-process-based testing procedure (not clearly specified which one from that paper) on MNIST and CIFAR-10 with corruptions. Methods are compared in terms of adaptability (for the benign shifts), detection delay, alarm rates, and runtime. Overall the considered criteria seem reasonable, although I don't necessarily think that runtime is a strong metric for this type of online monitoring given they're all relatively fast, and I believe more focus needs to be placed on the disambiguation in terms of (false) alarm rates and maintaining or breakdown of coverage guarantees coinciding with raised alarms. Theoretical Claims: The theoretical results are relatively straightforward and rely on key arguments from prior work, with an adaptation to weighted p-values. Thm. 3.1 seems most novel by extending the exchangeability testing argument of conformal p-values to weighted variants. Thm. 3.2 and 3.3. are exclusively reliant on existing arguments (e.g. Ville's Inequality applied to standard CTMs), and simply a direct invocation thereof. In fact, I would suggest to soften the notion of novelty associated with a "Theorem" here -- why not call them Lemmas instead? On another note, it would be nice to add intuition to the use of Thm. 3.3. Am I to read this as a softening of the high-probability false alarm guarantee given in 3.2 in terms of an expectation term? I am also unsure because it seems to read as a lower bound on the detection delay, but this does not say anything about the "good" performance of the method (i.e. akin to worst-case or upper bound?) Experimental Designs Or Analyses: My key issues with this paper are in the experimental section, which reads unclearly and omits all specifics on experimental design, metrics, baselines etc. Instead very similar-looking plots are repeatedly shown (but at times anecdotal, e.g. Fig. 3) to double-down on the difference between WCTM and standard CTM, whereas the comparison to sequential testing ends up allocated to two hard-to-read results tables with no explanations on the considered metrics and their calculation. Some of the questions I had reading this section where: - The X-CTM is introduced very late and in a rushed manner, even though it is fundamental to the method. Is the same design used as for WCTM? How would the WCTM perform in benign covariate shifts without the additional information given by the X-CTM, just on its own? - I didn't follow Eq. 17 on improving the martingale's adaptivity -- is this meant to symbolize a truncation of the past observations? - How exactly are the density ratio weights computed, and why was this particular estimator chosen? Ablations/comparisons to other options? How can we trust that this estimator is doing a good job, given its crucial role? - How exactly are the interval widths shrunk in Fig. 2? Is this exclusively due to the obtained weighting, or is there some kind of coverage-based correction? - Ablations/comparisons on the role of the betting function, which is known to be very important in dictating the growth of the test martingale? - The classification into benign and severe shifts is somewhat ad-hoc -- a shift is essentially called severe after the fact if the WCTM does not manage to adapt appropriately. E.g. in Fig. 3c the associated shift is a Level-3 corruption on 60% of samples -- who is to say that this is not severe? Rather than rooting those statements in their own method's ability, it seems more principled to root them in some kind of target risk score (e.g. conformal coverage or miscoverage risk reaching some threshold). I see something along those lines in Fig. 2 (left column) where coverage seems satisfied even under benign shift (also for CTMs). Can we see coverage results for sec 4.2 and Table 2? - Standard CTMs are equipped with the same Type-I error control or false alarm guarantees as the proposed method since they both rely on test martingales. So, I was surprised to see that Table 2 records such high false alarm rates. How can I explain this? Is this because in fact they are designed for the simple exchangeability hypothesis, whereas here you consider the one from Eq. 13, and thus CTMs do not have that guarantee? Then the comparison seems somewhat unfair or not clearly disambiguated. Associatedly, what kind of rejection thresholds are used (these are no-where discussed). Can we see a clear discussion on nominal vs. empirical guarantees and associated empirical false alarm rates for sec 4.2 and 4.3? - How come that CTMs record higher detection delays than WCTMs, given that they (1) do not try to adapt to shift and (2) are repeatedly shown to be more sensitive and reject faster in the plots? The statements made in L410-413 seem contradictory to that intuition. - CTMs can also be used without a rejection threshold, merely as a reflection of evidence for occured changepoints that does not disambiguate between severity of shifts. If this was coupled with the tracking of some loss or risk measure to assess severity such that decisions on raising alarms can be taken, what benefit is obtained by WCTM? - Why are there no baseline comparisons to (1) standard weighted CP methods which are clearly related, e.g. [1,2,3], (2) prior conformal test martingale methods, e.g. [4,5,6], and (3) related sequential testing and changepoint detectors, e.g. [7,8,9], including methods that similarly try to adapt to shifts online? These all seem very relevant, and a meaningful subset thereof would help clarify the benefits of WCTM by highlighting the limitations to each method (e.g. CTM vs. WCP) and how the combination gives new benefits. [1] Tibshirani, Ryan J., et al. "Conformal prediction under covariate shift." Advances in neural information processing systems 32 (2019). [2] Barber, Rina Foygel, et al. "Conformal prediction beyond exchangeability." The Annals of Statistics 51.2 (2023): 816-845. [3] Podkopaev, Aleksandr, and Aaditya Ramdas. "Distribution-free uncertainty quantification for classification under label shift." Uncertainty in artificial intelligence. PMLR, 2021. [4] Volkhonskiy, Denis, et al. "Inductive conformal martingales for change-point detection." Conformal and Probabilistic Prediction and Applications. PMLR, 2017. [5] Eliades, Charalambos, and Harris Papadopoulos. "A conformal martingales ensemble approach for addressing concept drift." Conformal and Probabilistic Prediction with Applications. PMLR, 2023. [6] Eliades, Charalambos, and Harris Papadopoulos. "A betting function for addressing concept drift with conformal martingales." Conformal and Probabilistic Prediction with Applications. PMLR, 2022. [7] Bar, Yarin, Shalev Shaer, and Yaniv Romano. "Protected test-time adaptation via online entropy matching: A betting approach." Advances in Neural Information Processing Systems 37 (2024): 85467-85499. [8] Shekhar, Shubhanshu, and Aaditya Ramdas. "Sequential changepoint detection via backward confidence sequences." International Conference on Machine Learning. PMLR, 2023. [9] Shin, Jaehyeok, Aaditya Ramdas, and Alessandro Rinaldo. "E-detectors: A Nonparametric Framework for Sequential Change Detection." The New England Journal of Statistics in Data Science 2.2 (2023): 229-260. Supplementary Material: I have looked at the attached appendix Relation To Broader Scientific Literature: Even though the general exposition and background in the paper is very nice, I believe the method is not clearly put into context to other related approaches and existing methods. I've provided some references above. One could also think about discussing other ways of combining conformal p-values (beyond within a test martingale framework), e.g. see [1] [1] Vovk, Vladimir, Bin Wang, and Ruodu Wang. "Admissible ways of merging p-values under arbitrary dependence." The Annals of Statistics 50.1 (2022): 351-375. Essential References Not Discussed: I've given what I consider relevant references missing or not clearly discussed above. Other Strengths And Weaknesses: - I liked the motivation in the introduction, the clarity of Fig. 1 to help the reader understand the general problem, and the nice background/exposition in section 2 at the appropriate level of depth. I also enjoyed the connection to the permutation interpretation of weighted p-values in sec 3.1 and 3.2. - If Eq. 13 ends up being the single hypothesis considered and experimentally examined even though the approach is motivated from a more general hypothesis class perspective $H_0(f)$, then it seems misleading to call sec 3.3. an "Example" unless more are given. Perhaps it should be emphasized that this is the primary testing objective. Other Comments Or Suggestions: - The paper could use a substantial proof-read since there are quite a few typos (e.g. some noted are L88, L186, L171, L301, L302). - Table 1 is confusing to read since there is a lot of bolding. Perhaps one can omit the blue bolding of the own method. - The plots in general are too small, and figure legends etc. are only readable if zoomed in. I would suggest a proper overhaul for readability there. - A clear outline of contributions (e.g. bullet points) in the introduction would be nice to have - In L140 what is $[0,1]^*$? - In Eq. 7 the $\forall t$ seems incorrect here, rather its for a given fixed $t$ Questions For Authors: - Please see questions in the experiments section that I would like to be addressed/clarified. - Can you clarify the interpretation given in L258-L264 for Eq. 13? To me this seems like a test for covariate shift and not the approximation quality of $\hat{f}^T(X) / f(X)$? Also, does the estimation of only $\hat{f}^T(X)$ still subsume a known $f(X)$, or is it supposed to symbolize an estimated *ratio* rather than estimated shifted covariate distribution? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your time, interest, & detailed feedback! We refer to the this anonymous link for supplemental figs: https://sites.google.com/view/authorresponse/home **Claims and Evidence:** - *SOTA statement:* We have removed the quoted statement, & revised it to refer specifically to how our methods compare to the evaluated baselines (regarding adaptation & detection speed). - See “Suggested baselines/Relation to Lit” section. **Methods And Evaluation Criteria:** See “**Spillover from 7BRr**” in response to **oGBa** for: - *Specific baseline e-process* - *Runtime metric* - *Coverage vs alarms* **Theoretical Claims:** - *Thm 3.2 & 3.3:* We are happy to rename “Thm 3.2” to “Proposition 3.2,” and same for 3.3 (we think proposition is more accurate than lemma). - *Intuition for Thm 3.3:* The guarantee is controlling the average-run length (ARL) *under the null hypothesis*, meaning how long the “scheduled” monitoring procedure can be expected to run without a false alarm before it needs to be reset. What you describe about detection delay would be a guarantee *under an alternative hypothesis;* we do not report such guarantees. **Experimental Designs Or Analyses:** - *X-CTM design & ablation of WCTM without X-CTM:* The $X$-CTM uses the same design (ie, same betting function) as the WCTM, but with a nearest-neighbor-distance score. Without a XCTM, a WCTM has 2 main options: (1) no adaptation, which reduces to standard CTM; or, (2) assume that the changepoint occurs at deployment time. See S2.1 at the link for an ablation study on (2). - *Eq (17):* If every test point is added to the calibration data (as in standard CTMs), then the weights become intractable to compute (Prinster et al. 2024); but, can avoid this by, at some point $t_{ad}$ (given by $X$-CTM), treating the cal data as fixed (no longer adding recent test points to it). (Fixing the cal set results in the WCTM testing a hypothesis that is *conditioned* on the cal set, vs marginal over it; we will add an appendix section to elaborate.) - *Density-ratio estimator:* The density-ratio weights were estimated via probabilistic classification, as described in Tibshirani et al (2019), Eq (9). Tabular experiments used sklearn’s LogisticRegression; image experiments used a prefit MLP with 2 hidden ReLU layers, fit on 5K pre-change and 5K post-change points. An example ablation study for the effect estimator misspecification is provided in S2.2 at the link provided. - *Widths shrunk, betting ablations & benign/severe shifts:* **See response #4 to b4ce.** - *Table 2 questions:* We meant to write “Unnecessary Alarm” to refer to alarms raised when there is a mild/benign shift. The rejection threshold for Table 2 was $10^6$ (as in Fig 3). WCTM unnecessary alarms are evidence that density-ratio estimation is harder for image data. We will add clear discussion of empirical vs nominal rates. - *CTM higher ADD:* Related to our discussion around Eq (17), once the WCTMs treat the cal data as fixed, they can sometimes reject faster than standard CTMs (the latter adds each test point to the cal set, which confuses the source/target distinction and sacrifices power). - *Benefit of WCTMs for tracking loss metric:* A loss metric can take the role of the nonconformity score, and WCTMs then would adapt to distribution shifts to maintain statistically valid estimates of how that loss compares to cal data. - *Suggested baselines/Relation to Lit:* (1) weighted CP methods are not comparable to WCTMs; rather, any WCP method offers a different opportunity to construct a WCTM *on top of it.* Ie, our expts are roughly construct WCTMs on top of WCP methods similar to [1]; in future work it would be valuable to consider WCTMs constructed for continual drift [2] and label shift [3]. We are happy to add comparisons to a subset of (2) and (3) in a camera-ready version, and we will discuss all of these more thoroughly in Related Work. In particular, [4-6] appear to primarily differ from our CTM baseline by betting function and ensembling (which can also be done with WCTMs). [7] is similar to us in the goal of online adaptation, although they focus on adapting the point prediction (while we adapt the weights); we have added more discussion of this. [8] and [9] could be compared to, although the CTM references are likely more closely related. **Other Weaknesses:** We have updated sec 3.3 subheading to “Main Practical Testing Objective.” **Other:** We are carefully revising to address typos, to improve clarity and readability. See response #1 to reviewer **b4ce** re contributions outline. $[0,1]^*$ is notation for a vector of any length with entries in [0,1], we will clarify. We meant Eq 7 is the definition for any $t$, we can update if unclear. **Eq 13 Q:** Should mean a shift to some $\hat{F}_X^T$ *such that* the density-ratio is $\hat{f}_X^T/f_X$. So, $f_X$ does *not* need to be known, only an estimate of the *ratio* is needed. We will update notation here. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and clarifications. The information is scattered everywhere so it was a bit hard to parse. On a high level most of my questions have been answered but fundamentally I am still missing a proper definition of what benign vs. medium vs. extreme constitutes. Even the additional experiments are essentially following the premise that the method's (un)ability to adapt dictates the form of observed shift. Clearly, this categorization is both gradual and problem-specific, so perhaps the authors can instead point out the fact that such a distinction is primarily user-driven, based on the information derived from the martingale's behaviour. Secondly, I am still missing proper explanations and analysis on the false alarm guarantees (not the conformal coverage guarantees but the Type-I sequential testing error), validity of guarantees under different settings and distinction to CTMs, Ramdas' e-process etc. I think these guarantees are a fundamental aspect of the sequential testing paradigm and should clearly be discussed. Similarly, a lack of clarity remains regarding metric formulations, proper explanation of experiment parameters / selected parameter values etc. These should where important be included in the main paper, and else in an experimental details section in the appendix. The paper is overall very dense and difficult to parse beyond the high-level intuition (i.e. in its practical implementation and workings) so I think the paper story should be streamlined to focus on the key aspects and deliver a more focused experimental evaluation with proper discussions, and move additional experiments (of which by now there are many) into the appendix. **I think if the authors properly incorporate these points into the paper it would be substantially strengthened**. Overall, I still like the fundamental ideas of the paper, and I think it combines a few interesting concepts from conformal prediction / sequential testing to address an interesting problem. Albeit novelty is a bit scattered, I think the overall combination and problem setting is of interest to the community. **What is positive to see across all the experiments (incl. rebuttal) is that the approach is fairly consistent in its behaviour**, and even though there is a lack of baselines its results seem promising. So, under the presumption that the authors take these comments to heart I am raising my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your response! (And, sorry for the info being scattered, due to char limits.) We especially appreciate your further feedback, which we do take to heart, & we agree that it can substantially strengthen our paper--we recognize that incorporating it is in our own interest, for our paper to be as well-received as possible. We plan to revise to add clarifications to each of your questions (including experimental details), streamlining as you describe, acknowledging any limitations (eg, other baselines or settings that could be added), & more. **Benign vs medium vs extreme covariate shifts:** First, we will acknowledge in the paper that these distinctions are “gradual and problem-specific” & can be “primarily user-driven.” However, even if partly subjective, we do not think they are arbitrary--in particular, the distinction we tried to make so far is that benign shifts are those where the “safety” of CP coverage is maintained nontrivially. Ie, the coverage-validity intuition for “benign” shifts corresponds to the WCTM’s null hypothesis (& harmful shifts violate it) as follows: - *Benign:* The martingale’s null hypothesis is that the WCP $p$-values are IID uniform (App. C.2); this null implies that coverage is satisfied (exactly) for all $\alpha\in [0,1]$ (ie, the martingale’s null $\implies$ intuitive definition of “benign” regarding coverage validity). - *Harmful:* (Contrapositive of the above.) If coverage is *not* satisfied (exactly) for some $\alpha\in [0,1]$, then the $p_t$ are *not* IID Uniform[0,1] (ie, violation of coverage validity $\implies$ violation of martingale’s null, thus possibility for detection). - Note: This can be due to under- or over-coverage (we further penalize trivial overcoverage, where $\hat{C}(X_{n+1})=\mathcal{Y}$, as in the link’s S1.2). - *Medium:* A shift may initially be “harmful” as described above, due to density-ratio estimator having insufficient data, but later become “benign” once it has enough data. We will note that further study of robustness (ie, when is density-ratio estimator “good enough”) & power (ie, how quickly can harmful shifts be detected) are important directions for future work. Additionally, regarding the difficulty/subjectivity in interpreting martingale values, we note that this is not unique to WCTMs; eg, Jeffreys’s “rule of thumb” is cited in related work including Vovk (2021) "Testing Randomness Online" (pg 601) for interpreting the strength of evidence. **Explanations & analysis on the sequential false alarm guarantees:** We agree that it is important for the anytime-valid (3.2) & scheduled/ARL (3.3) sequential false alarm guarantees to be thoroughly explicated, analyzed, & empirically evaluated; we will further revise so that these points are even clearer in the paper. - WCTMs achieve the same strength of guarantees as CTMs, but under different null hypotheses: our (3.2) has the same form as “strong validity” in Vovk (2021) & our (3.3) takes the form of his prop 4.2 in that paper; the difference is that *WCTMs achieve these guarantees under null hypotheses different than exchangeability* (we practically focus on the covariate shift null). The anytime-valid guarantee (3.2) controls the probability of raising a false alarm (despite null being true) at *any* time in an infinite run; meanwhile, the average-run length (ARL) guarantee (3.3) lower bounds the expected “expiration date” of the method until it needs to be reset, which is why it is called “scheduled” or “multistage.” - Re evaluations, as we state at the end of Sec 4.1, the blue in 3rd column of Fig 2 empirically validates Thm (3.1); 4th column validates (3.2); 5th column validates (3.3). - In Sec 4.3, we compare to the Ramdas e-process baseline by standardizing the (W)CTM & e-processes’ false alarm rates under their respective nulls at $P(alarm) \leq 0.01=1/c$ for anytime-valid methods & $ARL \geq 20,000=c$ for scheduled methods (see “Specific baseline e-process” in response to reviewer oGBa). - (Time permitting) We hope to add additional baselines mentioned by the reviewer, and/or an additional distribution shift setting. **Experimental details:** We have added full experimental details for the image-data experiments to the provided link’s S3 (Experiment Details), & we will add these tables to the paper appendix (with key information in the main paper); we will also double-check that all synthetic-data & tabular-data experimental details as well (or add any details that were not included before). We will ensure that all metrics are clearly defined. **Streamlining paper:** We appreciate, agree with, & will do our best to incorporate the reviewer’s suggestions on how the paper can be streamlined to be more accessible & clear, such as with a more focused experimental evaluation in the main paper. **Summary & thank you:** Again, thank you for your comments! We assure that we will work faithfully to incorporate your feedback & address your comments.
Summary: This paper proposes WATCH, a novel method that is able to check machine learning models after deployment to identify if their input data changes unexpectedly. WATCH uses a new approach, Weighted Conformal Test Martingales (WCTMs), to detect these changes. WATCH can ignore small data changes that do not affect the model much and only raises warnings when big changes occur. The authors conduct experiments on real datasets and show that WATCH performs better than previous methods at correctly identifying important data shifts without many false warnings. Claims And Evidence: - The claim on the contribution of using weighted conformal p-Values could be discussed more clearly. After reading section 3.1, i feel i am not clear on whether weighted conformal p-values are novel in this paper. I would like the authors to indicate clearly on how this is related to prior works such as [1, 2]. - Is the notion of the concept shift defined before? I couldn't find any discussion on this. I think this is important, given that this paper shows benefit most on this type of shift. So i wonder if it is new or it is considered an important task in this area before. I would love the authors to provide more evidence in this aspect to support the claim of the strong results. [1] Conformal validity guarantees exist for any data distribution (and how to find them). 2024. [2] Conformal prediction under covariate shift. 2019. Methods And Evaluation Criteria: - In the experiment section, the evaluated models are mostly simple neural networks, which seems rather limited (e.g., MNIST and CIFAR-10). I wonder if the evaluation can be conducted on large models and datasets, for example, ViT on ImageNet. - In the paper, the authors mostly consider covariate shift or concept shift, I wonder if there are other types of distribution shifts that could exist. I would like to see more discussion on this aspect. Theoretical Claims: I check the proof of theorem 3.1, 3.2 and 3.3, which is correct. But i would suggest to highlight the idea of the proof in the main text, rather than deferring them entirely to the Appendix. Experimental Designs Or Analyses: I find no issue in the soundness or validity of the experimental design or analysis. The evaluation mostly follow prior setups in this area. Supplementary Material: Yes, I viewed all the parts in the supplementary material. Relation To Broader Scientific Literature: I think the idea of detecting changes in the test data distribution could have many practical applications. The first is to monitor the deployment setting in real time. Also, this could help model developers adapt the model better to address potential catastrophical failures. Essential References Not Discussed: I don't have concerns on this aspect. Other Strengths And Weaknesses: - I am wondering if the authors could discuss more on how this result is practical, like in a real-world setting, how can it be used in large-scale machine learning systems. This could help reader better understand the impact of the paper. - If the shift is detected, then what can we do to address the problem? Are there ways to leverage the proposed detection method to improve the model dynamically at deployment time? If so, how would you do it? Other Comments Or Suggestions: None Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time and feedback! Please find our responses to your questions/comments roughly in order below. We refer to the following anonymous link for supplemental figures and algorithms: https://sites.google.com/view/authorresponse/home **Claims and Evidence:** - *Novelty of weighted-conformal $p$-values:* The general weighted-conformal $p$-values presented in Section 3 are novel in this paper. References [1] and [2] have a similar setup based on a weighted empirical distribution of nonconformity scores, but they only consider taking quantiles on the weighted distribution to compute prediction sets, they do not define, introduce, or mention weighted-conformal $p$-values nor any equivalent quantity, and they do not discuss how weighted conformal ideas can be used for hypothesis testing or monitoring (the focus of our paper). - *References for concept shift:* Concept shift--a shift in $Y|X$, the label distribution conditional on the input--is a common term and topic of study in the ML robustness literature. The earliest refs we could find discussing the idea of concept shift are the following--we have added these to the paper: (a) Webb, G. I., & Ting, K. M. (2005). On the application of ROC analysis to predict classification performance under varying class distributions. Machine learning, 58, 25-32; (b) Fawcett, T., & Flach, P. A. (2005). A response to Webb and Ting’s on the application of ROC analysis to predict classification performance under varying class distributions. Machine Learning, 58, 33-38. While prior CTM methods are also capable of detecting concept shift, our main novel contribution regarding concept shift is to enable valid continual monitoring for concept shift *even when covariate shift is also possible,* i.e., by enabling diagnosis between the two, and demonstrating fast detection empirically. At the link we have provided above, we have added additional ablation experiments on the simple synthetic data to illustrate how this behavior. **Methods And Evaluation Criteria:** - *Eval on larger models and datasets:* The theoretical guarantees of our methods make no assumptions on the ML model architecture or on the modality/complexity/size of the dataset, so yes in theory we could demonstrate similar performance with ViT on ImageNet. Our paper focuses on laying a theoretical and algorithmic foundation for WCTM-based monitoring methods, while opening the door to future evaluation on frontier and foundation models. - *Other types of shifts:* Yes, there are other types of shifts that can be studied. One example is “label shift,” which is a shift in the marginal $Y$ distribution (but not in $Y|X$); another example is slow distribution drift over time (rather than at a single changepoint). We have added discussion of WCTMs for these settings to our Conclusion. **Other Strengths And Weaknesses:** - *Practical use in large-scale ML system:* In large-scale ML deployments, the proposed WATCH framework could be used to monitor the performance of the system to detect harmful behavior patterns (e.g., increases in LLM hallucinations, jailbreaking attacks, toxic prompts/generated text), while also adapting online to more benign shifts (e.g., seasonality, user trends, etc). For instance, WCTMs could be run to monitor the distribution of LLM outputs while serving as a “safety filter” to provide probabilistic guarantees on the safety of the output shown to users. The massive computational cost of training large models paired with their wide-reaching effects makes the task of monitoring crucial, to quickly catch unsafe behavior while also minimizing costly retraining. - *Improving model dynamically at deployment time:* WATCH implements one approach to dynamically improving the model at deployment time, that is by performing online density-ratio estimation to maintain prediction safety (coverage) and informativeness (widths). Root-cause analysis can also inform other approaches to improving performance at deployment time, eg, via distributionally robust training (to be robust to detected shifts). **Spillover from Reviewer 7BRr (char lim)--Methods & Evaluation Criteria:** - *Specific baseline e-process:* We compared to the betting-based $e$-process in Podkopaev & Ramdas (2021) (from Waudby-Smith & Ramdas (2024)) since those methods performed best and to standardize. We compared WCTMs to sequential testing variant with standardized anytime-false alarm rate of 0.01; for SR-WCTMs we compared vs changepoint detection variant with common average-run-length (under the null) of 20,000. - *Runtime metric:* Runtime can be surprisingly nontrivial for monitoring long time-horizons. Ie, the changepoint baseline in Podkopaev & Ramdas (2021) has $O(t^2)$ complexity (due to starting a new sequential test at each time $t$); our comparable SR-WCTM is only $O(t)$. - *Coverage vs alarms:* In our Appendix, we plan to add coverage & width plots for all expts in our paper; some are available already at the link. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and my concern was mostly addressed. It would be better to show some results on larger models and datasets. Overall the paper is technically novel and therefore I maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response! In particular, thank you for confirming that your concerns have mostly been addressed and for recognizing the technical novelty of our contributions! We agree that it would be promising and valuable to also demonstrate results on larger models and datasets (eg, ViT on ImageNet) as an expansion on our current empirical results (which currently include neural networks evaluated on synthetic data, real tabular data, and real image data [MNIST & CIFAR-10]). Demonstrations on larger models and datasets is an important practical direction that our paper opens the door to--especially given that our methods require no formal assumptions/restrictions on the model, or on the size, modality, or dimensionality of the dataset--but we leave this direction open for future work.
Summary: This works proposes a weighted generalization of conformal test martingales (WCTMs), for online change point detection, which can continuously adapt to benign shifts without raising unnecessary alarms and quickly detect harmful shifts. Claims And Evidence: partially, see "Other Comments Or Suggestions" below, mainly the distinction between mild covariate shifts and extreme covariate shifts is not clear. Methods And Evaluation Criteria: yes Theoretical Claims: partially, the theorems seems to be correct to the best of my knowledge. Experimental Designs Or Analyses: yes Supplementary Material: appendix B - the real dataset part. Relation To Broader Scientific Literature: it is related to the broad community of online learning and distributional change detection, and is mostly related with Vovk 2021 and Vovk 2022 which studies the non-weighted version of conformal test martingale. Essential References Not Discussed: N/A, and I am not fully familiar with all literature in this area. Other Strengths And Weaknesses: Strengths: a new weighted version of CTM that can adapt to benign changes and detect harmful changes quickly. Weakness: the writing can be improved, and more mathematical concepts and a clearer definition of benign and harmful changes will make the problem setup easier to understand. Other Comments Or Suggestions: The mild covariate shifts and extreme covariate shifts are a bit vague in this paper, without clear distinction in terms of, for example, the covariate shift magnitude. Only with a clearer definition of benign shifts and harmful shifts, users can adjust the boundary between benign and harmful and tailor the detection algorithm accordingly. Due to this reason, the overall detection algorithm seems vague to me, and I would appreciate it if the author(s) could provide a detailed algorithm (including the choice of f functions) for performing the detection online, and justify how the algorithm treats benign and harmful shifts separately. The proposed method seems to depend a lot on Vovk 2021 and Vovk 2022. And it should be discussed whether the weights added is the only difference or maybe there are more innovations as compared with the previous literature. Questions For Authors: see above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your time and feedback! We refer to the following anonymous link for supplemental figures and algorithms: https://sites.google.com/view/authorresponse/home (1) **Clarifying writing, especially novelty of contributions relative to Vovk et al:** Regarding your comment on how “writing can be improved,” we are actively revising our paper for clarity. For example, re your comment on clarifying “innovations as compared with the previous literature,” we have revised the last part of our Introduction section to the following (this references a slightly revised Fig 1, available at the provided link): Recent advances in anytime-valid inference (Ramdas et al., 2023) and especially conformal test martingales (CTMs) (Volkhonskiy et al., 2017; Vovk, 2021) offer promising tools for AI monitoring with sequential, nonparametric guarantees. However, existing CTM monitoring methods (e.g., Vovk et al., (2021)) all rely on some form of exchangeability (e.g., IID) assumption in their null hypotheses---informally, meaning that the data distribution is the same across time or data batches---and as a result, standard CTMs can raise unnecessary alarms even when a shift is mild or benign (e.g., Figure 1a). Meanwhile, existing comparable monitoring methods for directly tracking the risk of a deployed AI (e.g., Podkopaev and Ramdas (2021b)) tend to be less efficient than CTMs regarding their computational complexity, data usage, and/or speed in detecting harmful shifts (Sec. 4.3). Our paper's contributions can be summarized as follows: - Our main theoretical contribution is to propose weighted-conformal test martingales (WCTMs), constructed from weighted-conformal $p$-values, which generalize their standard conformal precursors. WCTMs lay a theoretical foundation for online testing of a broad class of null hypotheses beyond exchangeability, including shifts that one aims to model and adapt to. - For practical applications, we propose WATCH: Weighted Adaptive Testing for Changepoint Hypotheses, a framework for AI monitoring using WCTMs. WATCH continuously adapts to mild or benign covariate shifts (e.g., Figure 1a) to maintain end-user safety and utility (and avoid unnecessary alarms), while quickly detecting harmful shifts (e.g., Figure 1b \& c) and enabling root-cause analysis. (2) **Additional Explanatory Ablation Experiments on Covariate Shift Magnitude:** To address your comment that our distinction between mild and extreme covariate shifts are somewhat “vague,” we have added additional ablation experiments to illustrate how WATCH performs with different magnitudes of covariate shift--please see Supplement S1.1 at the provided link. Ie, this ablation experiment empirically clarifies that covariate shifts can indeed be viewed as a spectrum from benign to harmful rather than as a binary category. For the monitoring purpose that our paper focuses on, however, we consider “benign” shifts to be those that can be safely ignored without sacrificing system safety (i.e., coverage) or system informativeness/utility (i.e., prediction set sharpness/interval width); our WCTM methods can thus be understood as performing a flexible (nonparametric) statistical test for whether a given shift is benign or harmful to the system, in this sense, with sequential false-discovery guarantees given by our reported theoretical results. (3) **Added Pseudocode, Highlighting How Algorithm Treats Benign vs. Harmful Shifts:** We are adding comprehensive pseudocode for all proposed algorithms as a new appendix section (Appendix E). For now, we have provided the annotated pseudocode for the specific sub-module of our methods that responds slightly differently to mild versus extreme shifts in $X$--please see Supplement S.1.2 for this pseudocode at the provided link. Specifically, the pseudocode highlights one way that our methods penalize (and thereby more quickly detect) covariate shifts that are so extreme (i.e., out-of-support or very little support) that the corresponding weighted conformal interval becomes noninformative (i.e., predicting the whole label space, $\hat{C}(X_{n+1})=\mathcal{Y}$ ). (4) **Spillover responses for Reviewer 7BRr (due to char lim) -- Methods And Evaluation Criteria:** - *Widths shrunk in Fig 2:* Due to the density-ratio estimator appropriately placing more weight on cal points with smaller scores--no further correction. - *Ablations on betting function:* In S2.3 at the provided link, we compare the Composite Jumper betting function to the Simple Jumper betting function used in baselines (eg, Vovk et al). - *Classification into benign and severe shifts:* See responses #2 & #3 to reviewer b4ce. Regarding a “target risk score,” note that (W)CTMs test for IID uniformity on the $p$-values; IID uniform $p$-values implies valid coverage at any significance level, which is more powerful than for a threshold. So, we argue our def is at least as principled. See sec 4.2 at the link provided for coverage results.
null
null
null
null
null
null
null
null
EBMaC: Empirical Bayes and Matrix Constraints for Label Shift
Reject
Summary: This paper introduces EBMaC (Empirical Bayes and Matrix Constraints), a new method for estimating importance weights in label shift problems. EBMaC uses hierarchical models via empirical Bayes to accommodate data dispersion beyond what multinomial models allow, and employs linear programming techniques to compute tighter confidence intervals for importance weights. Claims And Evidence: The paper appears incomplete and poorly written, with significant sections seeming truncated or missing, making it difficult to follow the full methodology and properly evaluate the claimed contributions of the EBMaC approach. Methods And Evaluation Criteria: 1. The details of other baselines, such as BBSE and MLLS, are lacking. These details could help further clarify the superiority of EBMaC. 2. The figures are too fancy to understand. For Figure 2 right panel, what do the different bars in the same cluster mean? Moreover, what is the role of providing the p-value? The author should show the result of empirical marginal coverage, which is a necessity for CP. Theoretical Claims: I have checked the proofs of theoretical claims. 1. The mathematical notations are confusing. The authors should standardize mathematical notations throughout the paper. For instance, $\alpha$ is used to represent both the concentration parameter and the confidence level of conformal prediction, which creates ambiguity. Experimental Designs Or Analyses: 1. In section 4.4, the results of empirical marginal coverage and the ratio of the volume of $\Omega$ to $\Omega_{GE}$ should be provided. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: ___\#1.___ _The paper appears ... EBMaC approach._ **Response** We actually spent much effort to write the paper, we apologize that you found it poorly written. Our contributions are that we firstly fomulated the problem under the EB framework, while the current methods are largely frequentist. Secondly, it is a new propose to estimate shorter confidence interval, which entails solving a interval-type linear syetem. Now, we revised the manuscript to make it coherent. Additionally, we added additional references for better understanding the motivation and the problem context. Please see our response to \#1 of Reviewer VSQq and \#1 of Reviewer iV7y. ___\#2.___ _The details ... of EBMaC._ **Response** Our current explanations on BBSE and MLLS are in the second and the third paragraph of Introduction. We added further descriptions of BBSE and MLLS for formulation, estimation construction, error bounds, and their limitations in the revision. We hope these details could clarify the superiority of EBMaC. ___\#3.___ _The figures ... necessity for CP._ **Response** We made efforts on encompassing comprehensive information in figures, and will do better in the revision. The different bars in the same cluster reflect different sample sizes. We chose the circular bar plots to compactly display the proportions that are highly varying from 0\% to 99.5\% across different datasets. The p-values is to show whether the changes of confidence interval length is statistically significant, or "real" in a layman's language. Points above the dashed line indicate statistically significant improvements, while those below may be due to randomness. We elaborated above points in the revision. We now added the empirical marginal coverage. The following is the empirical marginal coverage of 95\% confidence sets $\boldsymbol\Omega_\rm{GE}$ and $\boldsymbol\Omega_\rm{BH}$ for CIFAR-10 dataset when $\alpha = 10^p$ for $p=-3,...,3$ and sample size $m=1000,...,8000$. Due to character limit, we show the results about $\omega_1$. We see that BH and GE has close marginal coverage probabilities which are higher than 0.95, while BH is tighter than GE. [GE Coverage] |sample size|-3|-2|-1|0|1|2|3| |-|-|-|-|-|-|-|-| |1000|1.00|0.99|1.00|1.00|1.00|1.00|1.00| |2000|1.00|1.00|1.00|1.00|1.00|1.00|1.00| |3000|1.00|1.00|1.00|1.00|1.00|1.00|1.00| |4000|1.00|1.00|1.00|1.00|1.00|1.00|1.00| |5000|1.00|1.00|1.00|1.00|1.00|1.00|1.00| |6000|1.00|1.00|1.00|1.00|1.00|1.00|1.00| |7000|1.00|1.00|1.00|1.00|1.00|1.00|1.00| |8000|1.00|1.00|1.00|1.00|1.00|1.00|1.00| [BH Coverage] |sample size|-3|-2|-1|0|1|2|3| |-|-|-|-|-|-|-|-| |1000|0.98|0.97|0.99|1.00|1.00|1.00|1.00| |2000|0.99|0.99|1.00|1.00|1.00|1.00|1.00| |3000|0.99|0.99|1.00|1.00|1.00|1.00|1.00| |4000|1.00|0.99|1.00|1.00|1.00|1.00|1.00| |5000|1.00|0.99|1.00|1.00|1.00|1.00|1.00| |6000|1.00|0.99|1.00|1.00|1.00|1.00|1.00| |7000|1.00|1.00|1.00|1.00|1.00|1.00|1.00| |8000|1.00|1.00|1.00|1.00|1.00|1.00|1.00| [Length Ratio (GE/BH) for $\omega_1$] |SampleSize|-3|-2|-1|0|1|2|3| |-|-|-|-|-|-|-|-| |1000|1.128|1.051|1.013|1.009|1.010|1.010|1.010| |2000|1.096|1.056|1.010|1.010|1.011|1.011|1.011| |3000|1.091|1.068|1.011|1.010|1.012|1.011|1.011| |4000|1.121|1.038|1.011|1.011|1.011|1.012|1.012| |5000|1.119|1.055|1.013|1.011|1.012|1.012|1.012| |6000|1.089|1.049|1.012|1.011|1.012|1.012|1.012| |7000|1.115|1.063|1.015|1.012|1.012|1.012|1.013| |8000|1.118|1.062|1.013|1.012|1.013|1.013|1.013| ___\#4.___ _The mathematical ... ambiguity._ **Response** Thank you for catching this. We have changed the confidence level of the conformal prediction to $1-\zeta$. We also changed the Dirac measures from $\delta$ to $\kappa$ to avoid potential confusion with the significance levels. We went through the entire paper and did not detect other potential notational confusions. We thank you for raising this issue. ___\#5.___ _In section 4.4, ... provided._ **Response** Thank you for asking about this. The empirical marginal coverage is presented in the response to your \#2. We added the ratio of the volumes ($\boldsymbol{\Omega}$ / $\boldsymbol\Omega_\rm{GE}$) in the revision. Due to character limit, we only show the volume ratios (\%) for CIFAR-10 for $\alpha = 10^p$ for $p=-3,...,3$ and sample size $m=1000,...,8000$. |sample size|-3|-2|-1|0|1|2|3| |-|-|-|-|-|-|-|-| |1000|14.8|19.8|26.8|31.2|28.2|28.7|28.7| |2000|14.3|18.0|25.4|30.0|27.5|28.0|28.0| |3000|13.2|16.1|24.0|28.9|27.3|27.9|27.9| |4000|11.5|16.2|24.0|29.0|27.7|28.0|28.1| |5000|10.9|14.0|22.7|28.5|27.4|27.9|28.0| |6000|11.0|14.0|22.6|28.4|27.4|27.8|27.8| |7000|9.80|12.4|21.9|28.0|27.2|27.7|27.7| |8000|8.80|12.0|21.8|28.0|27.4|28.0|27.9| --- Rebuttal Comment 1.1: Comment: Thank you for your response. - Providing detailed descriptions of methods for label shift could assist readers unfamiliar with the label shift (e.g., those from the conformal prediction community) in better understanding your contribution. - It is necessary to clarify that the p-values in figures are derived from a one-sided t-test, as p-value is also a key concept in conformal prediction (CP). Insufficient explanation may lead to confusion. - Regarding the third point about coverage, your experimental results show coverage rates very close to 1. However, according to Si et al. (2023), their coverage is not as close to 1. Could you explain why your algorithm produces such conservative results? Additionally, your paper does not provide a comparison of prediction set sizes, which is a fundamental metric in conformal prediction. --- Reply to Comment 1.1.1: Comment: Please take $\bf w=\boldsymbol\omega$. ___\#6.___ _Providing detailed descriptions of methods for label shift ... understanding your contribution._ **Response** Thank you for your suggestions. We described the methods and clarified our contributions below. We will incorporate these into the revision. In terms of estimation of $\bf w$, a pioneering work BBSE defined the plug-in estimator $\widehat{\bf w}=\widehat{\bf C}^{-1}\widehat{\bf q}$, where $$\widehat C_{ij} = m^{-1}\sum_{s=1}^m\mathbb{1}[g({\bf X}_s) = i, Y_s = j],$$ $$\widehat q_k=n^{-1}\sum_{t=1}^n\mathbb{1}[g({\bf X}_t)=k]$$ for $i,j,k=1,...,K$ are sample estimators for $C_{ij}$ and $q_k$, respectively. They showed that $\|\widehat{\bf w}-{\bf w}\|_2$ is of order $\sqrt{n^{-1}\log n + m^{-1}\log m}$. RLLS estimated $\bf w$ as the minimizer of $\|\widehat{\bf C}{\bf w}-\widehat{\bf q}\|+\lambda\|\bf w-\mathbb{1}_K\|_2^2$, where $\mathbb{1}_K$ is the vector of ones of length $K$, and $\lambda$ is the regularization parameter. They showed $\|\widehat{\bf w}-{\bf w}\|_2$ is of error rate $\sqrt{n^{-1} + m^{-1}}$. Lastly, MLLS used likelihood approach to estimate ${\bf w}$. Let the $y$-th component of ${\bf w}$ be $\omega_y = q(y)/p(y)$ for $y=1,...,K$. MLLS used a classifier to approximate $p(y)$ and $p(y|\bf x)$ of the source domain, and then obtained the MLE for $q(y)$ of the target domain to form $\widehat{\bf w}$. In contrast to these three methods, we use the empirical Bayes approach to estimate ${\bf w}$, where we incorporate multiple classifiers and a hierarchical model. This is the first Bayesian method in this area and is our first contribution. To find confidence set for ${\bf w}$, using asymptotic error bounds derived in BBSE and RLLS is possible, but it does not guarantee the coverage under finite samples. In the finite sample case, the confidence set can be built by the confidence intervals of ${\bf C}$ and ${\bf q}$. Si et al. (2023) used Gaussian elimination in solving ${\bf w}={\bf C}^{-1}{\bf q}$ with intervals through error propagation to get the confidence set. In contrast, we bypass the matrix inversion problem and formulate the linear inequalities directly from ${\bf C}{\bf w}={\bf q}$. It can be easily solved by linear programming, and we find the set is optimal. This tighter confidence set of finite sample coverage guarantee is our second contribution. ___\#7.___ _It is necessary to clarify that the p-values ... lead to confusion._ **Response** Actually, the explanation of p-values in Figures 2-4 is in Sec 4.4. Specifically, we wrote that "we perform a one-sided $t$-test to evaluate whether the log-ratio of the lengths $\log(l_{k, \mathrm{GE}}/l_{k, \mathrm{BH}})$ is greater than zero across the labels". Here, $l_{k,\rm GE}$ and $l_{k,\rm BH}$ are the lengths of confidence interval for $\omega_k$ derived from GE and BH method. We added this to the caption of the figures in the revision for more clarity. ___\#8.___ _Regarding the third point about coverage... in conformal prediction._ **Response** We want to clarify that we are not working on prediction set estimation but on forming confidence sets for $\bf w$. The confidence set will allow people to further perform conformal prediction under label shift, but we did not get into the specifics in conformal prediction in this manuscript. In our construction of the 95\% confidence interval (see the response to your \#3), we need to engage the confidence sets for elements in $\bf C$ and $\bf q$. The results are conservative due to the Bonferroni adjustment on the confidence levels. Although Si et al (2023) created a confidence set for $\bf w$, they did not evaluate it in their simulations. Instead, they only reported the coverage rate of the PAC prediction sets for $Y$. Hence, our results and theirs are not comparable. We implemented the confidence intervals provided by Si et al (2023), and the result shows that our intervals are consistently shorter (Figures 2-4). Using tweak-one label shift setting, we applied conformal prediction to CIFAR-10 data to find the empirical coverage rate of prediction set and its size. We compared the results of true importance weights (oracle) and our method in Theorem 3.3 using $\Omega_{\rm BH}$, for prediction level 95\%, 90\%, 85\%, and 80\%. In our method, we set $\delta = 0.01$ and $\alpha$ that makes $1-\delta-\alpha$ be the prediction level. We set $m_1 = 1000$ and used $m=n=5000$ to form $\Omega_{\rm BH}$. The size of test set in the target domain is 900. We can observe that the prediction sets from $\Omega_{\rm BH}$ achieve the nominal coverage rate with nearly oracle set sizes. From these, we can show that our method guarantees the finite sample coverage without having to estimate the importance weights. |Prediction Level|oracle cvg(%)|BH cvg(%)|oracle size(se)|BH size(se)| |-|-|-|-|-| |95%|93.8|97.3|1.4(1.2)|1.8(2.0)| |90%|88.3|92.7|1.1(0.6)|1.2(1.0)| |85%|83.4|85.9|1.0(0.5)|1.0(0.5)| |80%|78.2|81.2|0.9(0.4)|1.0(0.4)|
Summary: This paper aims to estimate and infer confidence intervals for the importance weights under the assumption of label shift. In prior work, this was typically done by first estimating the confusion matrix of a given classifier on the source domain and its predicted label prevalence, with the estimation performed by solving a linear system. Instead, the authors propose using multiple classifiers along with a Bayesian modelling approach. They model both the confusion matrix and predicted prevalence using a multinomial distribution with a Dirichlet prior. The latent distribution is then estimated via maximum likelihood estimation. When introducing a new classifier, the authors suggest using posterior inference to estimate the confusion matrix C and predicted prevalence q, or their distribution. Additionally, they propose a method to derive a confidence interval for the importance weights given a confidence interval for the confusion matrix and predicted prevalence. Instead of using Gaussian elimination by upper and lower bounding the steps to solve the linear system, they formally characterize the solution (Theorem 3.1). They extract a confidence interval via linear programming, which they proved to yield tighter bounds compared to prior approaches, such as Gaussian elimination. These tighter confidence intervals naturally lead to improved prediction sets when applying conformal prediction under label shift. Claims And Evidence: The two proposals are sound and original. The use of multiple classifiers appears to be a good idea, and the Bayesian approach provides a natural way of combining their information. Additionally, the derivation of a tighter confidence interval by formulating it as a linear programming problem is also very interesting. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, the proof of Theorem 3.1 and Corollary 3.2. Relation To Broader Scientific Literature: Deriving tighter confidence intervals through the linear system instead of following the steps of Gaussian elimination is a valuable contribution to the literature. However, I am uncertain about the advantages of the Bayesian modelling approach, particularly the impact of using multiple classifiers in the methodology and the potential loss of the coverage guarantee given by prior approaches. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: > What additional benefits does the proposed Bayesian modeling provide compared to the classical estimation of the confusion matrix and prevalence? > Does Bayesian modeling maintain the same coverage guarantees as prior works that used classical confidence intervals for inference? If not, what advantages does Bayesian modeling offer? > How does the number of classifiers impact performance? Can the authors provide results for both a small and a large number of classifiers? > What properties should the classifiers have? Should they exhibit diverse performance levels? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ___\#1.___ _What ... prevalence?_ **Response** Our Bayesian modeling consists of hierarchical modeling with Dirichlet prior and hyperparmeter estimation with empirical Bayes. First, posing a prior increases model flexibility to accommodate more data distributions, for example, overcoming the limitation of multinomial model that the variance is not greater than the mean. Also, it has a connection with the regularization that prevents extreme estimates of confusion matrix and prevalence. The advantage of empirical Bayes is that we can naturally incorporate multiple classifiers which generates the confusion matrix and prevalence. If we had done it in a frequentist approach, combining results from multiple classifiers would lead to a model averaging problem, which has its own complication in both determining the averaging weights and in deriving theoretical properties. Moreover, when we have an additional classifier, our method can easily update the estimated hyperparameter, which enables online learning with empirical Bayesian framework. Thank you for asking about this. We added clarification on this point in the revision. ___\#2.___ _Does ... offer?_ **Response** Indeed, given the prior model and associated model parameters are correct, then the Bayesian approach maintains the targeting coverage guarantee. But, the prior model could be wrong, and the model estimation has variability, in which case we would no longer have coverage guarantee. However, we estimate the hyperparameter through the data by empirical Bayes, which is a data-driven method. In this way, we can properly approximate the true hyperparameters, which leads to the asymptotic coverage guarantee of confidence intervals. This is also related to the response to your #3. In this work, the additional advantage of the Bayesian method is in terms of incorporating multiple classifiers, etc. Please see our response to your \#1 on this aspect. In the revision, we added clarification on the coverage guarantee. ___\#3.___ _How ... classifiers?_ **Response** This is a very important question. The number of classifiers plays the role of sample size and it affects the estimation of the parameters in the prior distribution, i.e., the hyperparameters ${\boldsymbol\alpha}_s$ and ${\boldsymbol\alpha}_t$. Because the estimation of ${\boldsymbol\alpha}_s$ and ${\boldsymbol\alpha}_t$ is through maximum likelihood method, generally, $\widehat{\boldsymbol\alpha}_s-{\boldsymbol\alpha}_s$ has bias $b$ and variance $\sigma/G^{1/2}$, where $b$ and $\sigma$ are respectively the bias and standard deviation of a single classifier, and $G$ is the number of classifiers. Thus, we can say that if the original classifiers are consistent, then more classifiers will help reduce the overall error. We added discussions on this in the revision. Furthermore, we did quick experiments on MNIST dataset with different number of classifiers to compare the empirical performance of the method. For each concentration parameter $\alpha = 10^p$ for $p=-3,...,3$ and sample size $m=1000,...,8000$, the overall MSE of $\widehat{\boldsymbol\omega}$ is slightly smaller for 101 classifiers than for 11 classifiers, which agrees with above analysis. [$\log_{10}(MSE)$ for 11 classifiers with MNIST] |sample size|-3|-2|-1|0|1|2|3| |-|-|-|-|-|-|-|-| |1000|-2.49|-2.49|-2.72|-3.23|-3.46|-3.65|-3.68| |2000|-2.58|-2.63|-2.90|-3.46|-3.70|-3.85|-3.88| |3000|-2.69|-2.71|-3.01|-3.57|-3.81|-3.94|-3.95| |4000|-2.72|-2.81|-3.09|-3.65|-3.89|-3.97|-4.00| |5000|-2.83|-2.84|-3.16|-3.70|-3.94|-3.99|-4.00| |6000|-2.88|-2.91|-3.21|-3.75|-3.96|-4.02|-4.02| |7000|-2.94|-2.98|-3.22|-3.81|-4.01|-4.04|-4.05| |8000|-2.99|-3.00|-3.28|-3.86|-4.03|-4.07|-4.07| [$\log_{10}(MSE)$ for 101 classifiers with MNIST] |sample size|-3|-2|-1|0|1|2|3| |-|-|-|-|-|-|-|-| |1000|-2.49|-2.50|-2.78|-3.38|-3.80|-3.93|-3.97| |2000|-2.60|-2.62|-2.97|-3.62|-3.97|-4.07|-4.11| |3000|-2.64|-2.67|-3.03|-3.69|-4.01|-4.07|-4.11| |4000|-2.71|-2.76|-3.09|-3.76|-4.04|-4.06|-4.09| |5000|-2.74|-2.83|-3.15|-3.77|-4.02|-4.06|-4.08| |6000|-2.81|-2.87|-3.16|-3.82|-4.04|-4.07|-4.09| |7000|-2.84|-2.90|-3.20|-3.85|-4.04|-4.09|-4.09| |8000|-2.90|-2.95|-3.24|-3.87|-4.08|-4.09|-4.10| ___\#4.___ _What ... levels?_ **Response** In terms of the diversity of the classifiers, we only require the classifiers are trained on domains $\cal{X}\times\cal{Y}$, and prior distributions of ${\bf C}_g$ and ${\bf q}_g$ generated by a classifier $g$ belong to the Dirichlet distribution family. Other than that, we do not impose any other requirement. When the classifiers truly have diverse performance in predictions, it may require a more sophisticated design of the prior to capture the nature of their diversity. We chose the Dirichlet distribution because it is a conjugate prior of the multinomial distribution. And it facilitates computation of the posterior distribution. We added discussions on this in the revision.
Summary: The paper presents a method to address the problem of label shifts and presents a method for estimation of importance weights and the corresponding confidence sets. It constructs confidence regions of the confusion matrix and the predicted label distributions using the empirical Bayes method. Tighter confidence intervals compared to the existing methods by appealing to Linear programming. Theoretical evidence via an empirical bayes framework is provided along with synthetic modifications of CIFAR datasets. ## update after rebuttal Thanks to the authors for the reply. I have updated my score to Weak Accept, even though there are concerns on the lack of practical datasets and status quo on this issue despite it being a relatively old problem and on the other hand, lack of works recently in this domain. Claims And Evidence: The paper lacks enough motivation for the problem setup. For a reader not familiar with the area, it does very little to motivate the problem with examples in the Introduction, and why should one care about this problem. The introduction reads more like a related work, and goes on to present one-line introduction to existing works which is not useful either. Given that the main paper is 7 pages, the space should be used to motivate the problem well. Secondly, In the light of the above comment, the experimental results which are mainly synthetic modifications of MNIST/CIFAR dataset are less than convincing of the proposed methodology. If the problem is important, which it seems to be, are there some realistic or more direct datasets on which the proposed method could be evaluated. Compared to the Lipton et al 2018, which is from the same domain and heavily reference throughout, the paper does little for a more detailed exposition of the ideas, and motivation. Methods And Evaluation Criteria: The evaluation criteria seems less than convincing, please see the second comment above on the datasets used. Secondly, the methods for comparison are from 2018 (BBSE) and 2019 (RLLS), which seem to be rather not so recent, and based on the implementation taken from another paper in computer vision (Ye et al. 2024). Theoretical Claims: Had one look over the theoretical claims presented in Section 3. Did not find any issues, but this is not the area I have worked in, so it is possible that I did not verify all the claims in detail. Experimental Designs Or Analyses: Please refer to the comments above under “claims and evidence”. The experimental design needs improvement, by taking more reaslistic datasets. To my understanding, for a problem as discussed in the paper, MNIST/CIFAR datasets (or their synthetic variations) are not right candidates. Supplementary Material: Did not review Relation To Broader Scientific Literature: The paper is largely compared to methods from 2018/2019, which are not that recent. However, since being unfamiliar with the literature, it’s hard for me to say how it is positioned wrt broader literature. Essential References Not Discussed: Not to my knowledge Other Strengths And Weaknesses: Positives The problem domain seems relevant but the paper does not do enough to motivate it. The theoretical claims seem convincing and take an orthogonal approach compared to existing methods. For weaknesses, please refer to the comments above regarding experimental setup, comparison with more recent methods, and motivation. Other Comments Or Suggestions: None Questions For Authors: As mentioned above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ___\#1.___ _The paper lacks enough ... to motivate the problem well._ **Response** In the revision, we have provided more motivation of the problem and why it is important to estimate the importance weights and the associated confidence sets. We added examples and discussed more existing works. We also put the problem in a broader context of distribution shift to help demonstrate the bigger picture. Please also refer to the response to \#1 of Reviewer VSQq. ___\#2.___ _Secondly, ... could be evaluated._ **Response** This is an interesting point. Indeed, in the most important works on this topic (BBSE (Lipton et al., 2018), RLLS (Azizzadenesheli et al., 2019), and MLLS (Alexandari et al., 2020)), the datasets analyzed are typically MNIST, CIFAR-10 and CIFAR-100, and synthetic modifications are always used. These are considered as benchmark datasets to evaluate a method. For these reasons, we also implemented our method EBMaC on the synthetic modification of these datasets, and we found EBMaC has a superior performance. ___\#3.___ _Compared to the Lipton et al 2018, ... and motivation._ **Response** In the revision, we now added more detailed exposition of our main ideas and motivations. In terms of the main ideas, after some preliminary analysis, the central problem becomes finding the confidence intervals for the elements in $\boldsymbol{\omega}=\bf C^{-1} \bf q$. For this purpose, our first idea is to use an empirical Bayes approach to estimate ${\bf C},{\bf q}$ and form the confidence intervals for all the elements in ${\bf C},{\bf q}$. This approach has the advantage of allowing us to incorporate multiple classifiers and is an un-explored approach in label shift domain. The second main idea is to form the tightest confidence intervals for the elements in $\widehat{\boldsymbol{\omega}}$ by making use the confidence intervals for ${\bf C},{\bf q}$ and solving the interval-type linear system $\bf C\boldsymbol{\omega}= \bf q$. Conceptually, we achieve the tightness of the bound via making use of and interpreting the precise probability statement of confidence intervals. Technically, we achieve the tightness via linear programming. We now revised the paper to better reflect these ideas. For the general ideas in the domain and the motivation, please also refer to our response to your \#1 and the response to \#1 of Reviewer VSQq. ___\#4.___ _The evaluation criteria seems ... (Ye et al. 2024)._ **Response** Indeed, all the important works that contain baseline methods in the estimation and inference for the importance weights use MNIST/CIFAR datasets and incorporate synthetic modification. Please also see our response to your \#2. There have been three most recognized methods for estimating the importantce weights, namely BBSE, RLLS and MLLS. All three are still considered as the most important and successful methods in the literature and for this reason, they were recently implemented by Ye et al., 2024 and became accessible to all researchers. We think it is sensible to use the tools provided by Ye et al., 2024 instead of redoing the programming. In addition, in our work, we further construct confidence intervals through linear programming, which requires much implementational work and computation as well, and forms the most essential computational component of the paper. ___\#5.___ _Please refer to ... not right candidates._ **Response** This is related to your \#2. Please also see our response there. We hope to have your understanding of the status of the current literature and our situation. ___\#6.___ _The paper is largely ... wrt broader literature._ **Response** This is related to your \#4. Please also see our response there. Indeed, in this area, BBSE, RLLS and MLLS, which were developed in 2018-2020, are still the baseline methods today. Looking back, indeed very little advancement in this area has been made, especially in terms of the exact results under finite-sample situation. This is partially because the problem is not easy to solve. The situation also provided us the opportunity to make some important contributions in terms of providing a tight confidence set in finite sample situations. ___\#7.___ _Positives ... and motivation._ **Response** We thank you for recognizing the positives of the paper. Indeed, we work on estimating and inference of the importance weights in the label shift framework, which is a subfield in the more general distribution shift domain. We provide a very different take to this problem which enables us to obtain the tight confidence intervals in finite samples. We also adopt a Bayesian approach which enables to incorporate the multiple classifiers. In the revision, we better motivated the problem and added more background information and literature to help providing a bigger picture. We have addressed your individual comments. Please see \#1-\#6.
Summary: The paper addresses the label shift problem. Label shift occurs when label distributions differ between source and target domains. The authors propose EBMaC, a method combining empirical Bayes and matrix constraints. Traditional methods assume multinomial distributions. EBMaC uses hierarchical models for greater flexibility. It accounts explicitly for overdispersion. The authors construct exact confidence sets for the confusion matrix and predicted label distributions. They introduce linear programming for tighter confidence intervals. These intervals are smaller than previous methods. Confidence regions for importance weights are also tighter. The method improves prediction accuracy. EBMaC handles multiple classifiers simultaneously. Traditional methods use Gaussian elimination, but that leads to large confidence sets. EBMaC avoids this issue with linear programming. The method outperforms existing approaches numerically. The paper evaluates EBMaC using MNIST, CIFAR-10, and CIFAR-100 data. Empirical results demonstrate superior performance. The confidence sets by EBMaC are significantly tighter. Especially for difficult classification tasks, improvements are noticeable. EBMaC's hierarchical model captures variance effectively. It estimates parameters with empirical Bayes. The method yields finite-sample guarantees. EBMaC constructs conformal and PAC prediction sets. These sets have robust theoretical backing. Empirical tests show strong improvement. EBMaC consistently outperforms baseline methods. When classifiers have poor accuracy, EBMaC still performs well. Linear programming is key to EBMaC’s success. The method is statistically rigorous and computationally efficient. EBMaC sets new benchmarks for label shift estimation and prediction. Claims And Evidence: The authors claim that EBMaC significantly improves label shift estimation. Existing methods produce loose confidence intervals. EBMaC generates tighter intervals using empirical Bayes and linear programming. The paper claims empirical Bayes accounts effectively for overdispersion. Numerical results from MNIST, CIFAR-10, and CIFAR-100 support this. Variance-mean ratios confirm overdispersion occurs in practice. EBMaC adapts well to this phenomenon. Linear programming produces tighter confidence regions than Gaussian elimination. The authors demonstrate this with statistical tests. EBMaC intervals are consistently smaller, especially when classifiers perform poorly. Confidence regions from EBMaC have exact finite-sample guarantees. Theoretical proofs back this claim rigorously. EBMaC yields smaller prediction sets in downstream tasks. The authors provide both conformal and PAC prediction guarantees. Experiments validate these claims. The PAC prediction sets from EBMaC are provably smaller. Statistical tests show confidence sets from EBMaC significantly outperform existing methods. When classifiers have high accuracy, improvements are moderate. With poor classifiers, improvements are substantial. Results from CIFAR-100 highlight this point clearly. EBMaC handles multiple classifiers simultaneously. Empirical results confirm this increases robustness. Claims about tighter confidence intervals are supported by detailed numerical evidence. The authors show EBMaC achieves lower mean squared errors. Experiments systematically vary data size and label imbalance. Results are robust across various conditions. EBMaC consistently provides superior estimation and inference. The paper rigorously proves that linear constraints produce optimal sets. EBMaC thus represents a significant methodological advance. Methods And Evaluation Criteria: Yes, see above. Theoretical Claims: no errors found Experimental Designs Or Analyses: yes, see above. Supplementary Material: I did not review the supplement. Relation To Broader Scientific Literature: Distribution shift is widely studied in machine learning. Two main forms are covariate shift and label shift. This paper addresses label shift specifically. It builds directly on previous confusion-matrix-based approaches. Earlier work includes BBSE by Lipton and RLLS by Azizzadenesheli. Those methods pioneered linear relationships between confusion matrices and predicted labels. EBMaC extends this literature with empirical Bayes modeling. Empirical Bayes was not previously applied to label shift. EBMaC also improves upon Gaussian elimination approaches recently introduced by Si et al. The authors contrast EBMaC against maximum-likelihood-based methods like MLLS. EBMaC offers improvements in accuracy and computational stability. Prior literature often relies on asymptotic results. EBMaC provides finite-sample theoretical guarantees. This is a key theoretical improvement over previous methods. Conformal prediction methods from Podkopaev and Ramdas are also referenced. EBMaC incorporates conformal methods explicitly. PAC prediction frameworks from Valiant and recent extensions by Park et al. and Si et al. are relevant. EBMaC further tightens PAC prediction sets. The paper references established neural network calibration studies from Guo et al. It highlights how calibration affects label shift estimation. EBMaC thus combines diverse strands of literature. Empirical Bayesian methods come originally from statistical modeling literature. Linear programming techniques come from operations research and optimization literature. EBMaC effectively unifies statistical, computational, and optimization literatures. Empirical results confirm significant improvements. The paper is thus a clear methodological advancement in label shift estimation. Its innovations are well-integrated with existing scientific research. Essential References Not Discussed: The paper misses some key references. In fact, I consider the article's handling of related work the weakest part of the article. The paper is missing a related work section entirely. Below are some references that I encourage the authors to discuss - **Moreno-Torres et al. (2012)** *"A unifying view on dataset shift in classification."* Provides foundational definitions and distinctions among different types of dataset shift, including label shift. This foundational literature is relevant but missing in the current paper. - **Zhang et al. (2013)** *"Domain adaptation under target and conditional shift."* Addresses distribution shifts, including label shift, offering theoretical insights and practical methods directly relevant to EBMaC. - **Scott (2015)** *"A rate of convergence for mixture proportion estimation, with application to learning from noisy labels."* Presents statistical theory related to importance-weight estimation under shifted or noisy labels, closely aligned with EBMaC’s theoretical contributions. - **Reddi et al. (2015)** *"Doubly Robust Covariate Shift Correction."* Although focused primarily on covariate shift, introduces robust estimation methods methodologically similar to EBMaC’s approach. - **Singh et al. (2024)** *"Domain generalisation via imprecise learning."* Investigates distribution shifts and robust learning approaches, providing relevant theoretical background and practical methods complementary to EBMaC. - **Nalenz et al. (2024)** *"Learning de-biased regression trees and forests from complex samples."* Focuses on correcting bias in learned models from shifted or complex distributions. Its methodological insights into robust estimation are conceptually related to EBMaC. - **Rodemann et al. (2023)** *"Approximately Bayes-optimal pseudo-label selection."* Develops Bayesian-inspired pseudo-labeling approaches with theoretical guarantees. This work offers relevant insights into Bayesian methodology related to EBMaC’s empirical Bayes framework. Other Strengths And Weaknesses: none Other Comments Or Suggestions: typos that I found: "Multinomial model" (abstract section) Original: "permited" Correct: "permitted" "Empirical results" (introduction) Original: "In finite samples, BBSE and RLLS relies" Correct: "rely" "Section 3.1.2" (Estimation of Hyperparameters) Original: "The partial derivative of with respect to αs,k is" Correct: "The partial derivative with respect to αs,k is" Questions For Authors: see related work above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ___\#1.___ _The paper misses some key references. In fact, I consider the article's handling of related work the weakest part of the article._ **Response** Thank you for pointing this out. Originally we only included references in label shift which is the topic of this paper. We agree with you that label shift belongs to a more general distribution shift context, and we now cited more general references. In the revision, we have added all the references you suggested and provided discussions on them. Specifically, we added the following. ___Importance weights in distribution shift___ Estimation and inference of importance weights also arise in more general distribution shift problems. Using the ratio of densities between the source and target distributions, Moreno-Torres et al. (2012) demonstrated how distribution shift problems can be defined and classified based on the causal relationship between $X$ and $Y$ . The problem was further explored in Zhang et al. (2013), where importance weights were estimated using quadratic programming and kernel mean matching techniques. Distribution shift is closely related to the mixture proportion estimation problem, as the mixture proportion of the source distribution required to form the target distribution corresponds to the infimum of importance weights (Blanchard et al., 2010). Furthermore, Scott (2015) offers a theoretical perspective on estimating the mixture proportion within this framework. ___Robust estimation___ EBMaC incorporates multiple classifiers via Bayesian modeling, even when some of the classifiers do not perform well. This is conceptually related to robustness consideration. Under covariate shift setting, Reddi et al. (2015) proposed a fundamental doubly robust procedure of estimating the conditional mean of labels in the target distribution, where either the importance weights or the conditional mean from source data may be incorrectly estimated. Nalenz et al. (2024) discussed constructing a random forest derived from a complex sample, where each observation has a different weight of being sampled. ___Bayesian method___ Although Empirical Bayesian approach has not been fully explored within the machine learning framework, there exist Bayesian hierarchical modeling methods in this domain. Rodemann et al. (2023) examined a semi-supervised learning setting to construct the pseudo-labels for unlabeled data in the presence of a prior distribution by approximating the increased marginal likelihood. ___Model averaging___ EBMaC serves as a model averaging or aggregation method by combining multiple classifiers within a Bayesian hierarchical modeling. In a similar vein, Singh et al. (2024) investigated imprecise learning, where a learner trains the model using data drawn from a set of incorrect distributions. The risk levels determine how risk functions are aggregated from multiple distributions. When the risk levels are assumed to be independently sampled, they demonstrated that the aggregated risk asymptotically converges to the risk of a Pareto optimal model. ___\#2.___ _Typos_ **Response** We only detected the third typo you pointed out and we have corrected it in the revision. We also went over the entire paper multiple times and corrected all other typos we detected. We thank you for the careful reading. ___\#3.___ _The paper misses some key references. In fact, I consider the article's handling of related work the weakest part of the article. The paper is missing a related work section entirely. Below are some references that I encourage the authors to discuss_ _Here is the literature list formatted consistently:..._ **Response** We have included all these references and added discussions on them. Please see our response to your \#1. Thank you. --- Rebuttal Comment 1.1: Comment: Thanks for carefully addressing all the points raised in my review. I appreciate the thorough revisions you've made, particularly the substantial improvements to the related work section. I beliebe these updates have significantly strengthened the paper. Based on these revisions, I am pleased to increase my recommendation score. Thank you again for your diligent and thoughtful response. --- Reply to Comment 1.1.1: Comment: We appreciate your engagement in the review process.
null
null
null
null
null
null
Improving Consistency Models with Generator-Augmented Flows
Accept (spotlight poster)
Summary: The paper studies some theoretical aspects of the Consistency models (in particular, the consistency training regime). As the practical contribution, the authors propose to augment consistency training with so-called Generator-Augmented flows. The idea is to substitute the basic independent/mini-batch OT coupling used in consistency training with more sophisticated coupling which (typically) depends on the trained consistency generator itself. The authors provide some theoretical reasoning behind their methodology and conduct practical benchmarking. Claims And Evidence: Everything more-or-less ok Methods And Evaluation Criteria: The practical validation methodology seems to be ok Theoretical Claims: I went over the proof of Theorem 1 (not very attentive) - seems to be ok I only take a glance at the proof of Theorem 2, but (seems to) understand the idea behind it. Other theoretical results are not carefully checked. Some questions/issues/clarifications: 1. [C] Proof of theorem 2, lines 791-793. “Most importantly, the velocity term is not $\dot{\tilde{x}}_t$, but $\dot{\sigma}_t z$.” But, as I understand, $\dot{\tilde{x}}_t = \dot{\sigma}_t z$? Experimental Designs Or Analyses: I think the authors have a good suit of experiments. In particular, the authors make several theoretical claims/insights about their method and try to support these claims/insights practically. At the end of the manuscript, the authors compare their approach with baselines on several benchmark datasets. This is a good experimental design. Some questions/issues/clarifications: 1. [Q] I do not completely understand the experimental setup in sections 4.2.1 and 4.2.2. What was chosen as an “ideal consistency model” ($\overset{\circ}{f}$) in this experiment? I didn’t find experimental details. 2. [C] Figures 2, 3, x-axis. What is “timesteps $\sigma$”? Is it overall time T/ delta t? 3. [I] The gap in $\tilde{R}$’s values in Figure 2 does not seem to be significant. Do we really need to fight for such an improvement? 4. [I] Table 1: You report the FID metric for iCT-IC baseline. In the original work about iCT [Song and Dhariwal, 2024], the authors also provide FID score, see Table 2 in their paper, and their obtained FID values (~2.5-3). Why do you report ~ 2 times larger FID values? Also, how many function evaluation steps (1, 2, or more) was used to obtain the FID? Supplementary Material: I checked some proofs, took a glance at the other parts. Relation To Broader Scientific Literature: To the best of my knowledge, the idea with Generator-Augmented flow and its theoretical investigation are novel (at least, in the context of Consistency Models). Regarding the survey of Schrodinger bridge-related papers (lines 209-211, second column), some good references: 1. Solving schrodinger bridges via maximum likelihood. (Entropy) 2. Diffusion bridge mixture transports, schrodinger bridge problems and generative modeling. (JMLR) 3. Adversarial Schrödinger Bridge Matching (NeurIPS’24) Essential References Not Discussed: No Other Strengths And Weaknesses: Consistency models are known to be not very stable, especially when $\delta t$ step approaches zero and in continuous-time limit. I expect that the utilization of Generator flow should lead to even worse stability. Do the authors experience such problems in practice? Other Comments Or Suggestions: To be honest, I do not understand why the discrepancy between CT and CD losses matters. Ok, the losses $L_{CT}$ and $L_{CD}$ are a bit different (e.g., in continuous time-limit), but why is it an issue that needs to be addressed, as outlined in the abstract and contributions? As I understand, the aim of CT is to learn a generative model - why does the difference between $L_{CT}$ and $L_{CD}$ cause poorer performance of CT? I expect that the main problem (it is also mentioned in the text) is the variance of $L_{CT}$ estimator. Please elaborate more on this point. Questions For Authors: 1. How do the results of Theorem 1 correspond to those by [Song et. al]. What was lost in the analysis by [Song et. al.] that they “missed” the discrepancy between CT and CD? Some more elaborate discussion should be done on the comparison of the theoretical results. 2. Theorem 1, case $\alpha = 2$. How could it be the case that limiting gradients are equal, but the objectives themselves are not, and the difference between objectives depends on $\theta$. [Song et. al.] Consistency model. Some misprints/minors: 1. What is variance of data, $\sigma_d$, lines 147-148 2. In lines 116-117 it is noted that $v_t(x) = E[\dot{x}_t | x_t = x]$. From which it follows? Some references? 3. Line 254-255, second column $\tilde{R}_{batch-OT}$ should be with $t$ subscript. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive and constructive review. We address the concerns below. --- > **Theoretical Claims / 1.[C]** Lines 791-793: as I understand, $\dot{\tilde{x}}_t=\dot{\sigma}_t z$? $\dot{\tilde{x}}_t = \frac{d(\hat{x}_t)}{dt} + \dot{\sigma}_t z \neq \dot{\sigma}_t z$ where $\hat{x}\_t = G\_\theta(x\_t, \sigma\_t)$. However, a pair of training GC points starts from the same $\hat{x}_t$, which sets the velocity term to $\dot \sigma_t z$. --- > **Experimental Designs Or Analyses / 1.[Q]** I do not completely understand the experimental setup in sections 4.2.1 and 4.2.2. What was chosen as an "ideal consistency model”? The ideal consistency model in all experiments of Sections 4 and 5 is a consistency model trained in a standard manner on independent coupling (iCT-IC), as described in Section 5.1. We apologize for the confusion and will clarify this point in Section 4. --- > **Experimental Designs Or Analyses / 2.[C]** Figures 2, 3, x-axis. What is “timesteps $\sigma$”? It is the diffusion noise's standard deviation $\sigma_t$, as in $x_t = x_0 + \sigma_t z$. --- > **Experimental Designs Or Analyses / 3.[I]** The gap in $\tilde{R}$’s values in Figure 2 does not seem to be significant. Do we really need to fight for such an improvement? Note that on large timesteps, there is an order of magnitude of difference between IC and GC. Then, the relationship between improvement in $\tilde{R}$’s values and improvement in final sampling quality is unknown. We hypothesize that a small improvement of $\tilde{R}$ could lead to significant change on sampling quality, as our analysis suggests. --- > **Experimental Designs Or Analyses / 4.[I]** Table 1: In the original work about iCT [Song and Dhariwal, 2024], the authors report FID values ~ 2 smaller than yours. Why such a difference? Also, how many function evaluation steps was used to obtain the FID? We report FID with 1 NFE. The difference with iCT stems from the following differences: iCT uses 400k iterations with batch size 1024, while we use 100k with batch size 512. For fairness, all models were trained and evaluated in the same setting. Note that our ECT experiments follow the original setting from the authors, and that we present new results (see rebuttal to Reviewer PJfJ). --- > **Other Strengths And Weaknesses.** Does Generator flow worsens stability issues of consistency models? We observed instabilities in two settings: with mixed precision in the iCT setting; with large pre-trained models in the short training setting of ECT. In both cases, instabilities arose for both IC and GC. --- > **Other Comments Or Suggestions.** I do not understand *why the discrepancy between CT and CD losses matters*. [...] Why is it an issue that needs to be addressed, as outlined in the abstract and contributions? [...] I expect that the main problem (it is also mentioned in the text) is the variance of $L_{CT}$ estimator. *[emphasis ours]* CD provably approximates the target diffusion flow, ensuring convergence to the data distribution. We show that CT is not equivalent to CD due to a Jacobian regularization term (Theorem 1), which prevents CT from learning the true diffusion flow. This challenges widely accepted knowledge and suggests that the generator learned by CT may not converge to the data distribution. We hypothesize that addressing this discrepancy improves performance, as confirmed by our experiments. --- > **Questions / 1.** How do the results of Theorem 1 correspond to those by [Song et. al]. *What was lost in the analysis by [Song et. al.]* that they “missed” the discrepancy between CT and CD? *[emphasis ours]* The difference comes from two issues: - in their Theorem 2 ($L_{CT} = L_{CD}+o(\Delta T)$), the $o(\Delta T)$ is actually too large compared to the other term, and consequently the result is uninformative; - their Theorem 6 (limiting gradient equality) is stated with a general distance function, but the requirements on its Hessian restrict the theorem's validity to a quadratic loss. We will clarify those differences in the next version of the paper. --- > **Questions / 2.** Theorem 1, case $\alpha=2$. How could it be the case that limiting gradients are equal, but the objectives themselves are not, and the difference between objectives depends on $\theta$. The case $\alpha=2$ is particular: objectives are not equal, and gradients of the limit loss are not equal. However, limiting gradients are equal because of the interplay between the "stop-gradient trick" (whose effect disappears only in continuous time and not in the limit) and the quadratic loss (cf proof of Theorem 1, L668). This equality is in some sense a coincidence. --- > **Minor / 1.** What is $\sigma_d$? As in EDM (Karras et al., 2022) and later works, $\sigma_d=0.5$ in our experiments. --- > **Minor / 2.** It is noted that $v_t(x) = \mathbf{E}[\dot{x}_t | x_t =x]$. [...] Some references? This can be seen in Liu et al. (2023), Definition D.1.
Summary: # Update My criticism is minor, and I think the authors have adequately addressed it. As I have already given a positive review, I decided not to change my evaluation. # Old Summary Consistency model is a type of generative model that sends points along the sampling trajectory of a diffusion model to the last point (i.e., a noiseless data item) on that trajectory. There are well-known two training algorithms for training consistency models. The first is consistency distillation (CD), whichi involves distilling a teacher diffusion model. The second is consistency training (CT), which does not require a teacher. Models trained with CT generally underforms those trained by CD. The paper points out a theoretical difference betweem CD and CT, which may help explain the discrepancy in observed performance. In particular, it shows that the loss functions for CD and CT are different even at the limit of the time step size approaching zero. Moreover, when the distance function used in the losses are not the L2 distance, the gradients of the losses are different. This implies that CD and CT would likely lead to different converged models. The paper then proposes a way to improve CT by altering the points at which the model under training are evaluated during training. Let $f_\theta(x_t, t)$ denote the model under training. In CT, we would sample $x_* \sim p_{\mathrm{data}}$ and $z \sim \mathcal{N}(0,I)$, and a time step $t_i$. Then, we would try to minimize the distance between $f_\theta(x_* + t_i z, t_i)$ and $f_\theta(x_* + t_{i+1} z, t_{i+1})$. Assuming access to idealized consistency model $\overset{\circ}{f}(x_t, t)$, the paper proposes computing the final point $\tilde{x} = \overset{\circ}{f}(x_* + t_i z, t_i)$ and then minimizing the distance between $f_\theta(\tilde{x} + t_i z, t_i)$ and $f_\theta(\tilde{x} + t_{i+1} z, t_{i+1})$ instead. This modification is called "generator-augmented coupling" (GC). The paper mathematcally shows that GC has two benefits. The first is that GAF reduces a "proxy term" that measures the discrepancy between the loss functions of CT and CD. The second is that it reduces the expected L2 distance between a Gaussian noise $z$ and the point $f_\theta(z, 0)$ it is mapped to. However, GC as just stated is useful because, if we already have the idealized consistency model, we would not be interested in training a new consistency model in the first place. The paper offers two remedies. The first remedy is to train a consistency model with CT and use it as an approximation of $\overset{\circ}{f}$ in the next round of CT, but now with GC. The paper finds that this results in faster convergence in the 2nd round. Still, this approach is not practical as it makes the training process twice longer. The second remedy is to modify CT with probability $\mu$, CT-with-GC is used instead of the normal CT loss calculation method, and the model under training $f_\theta$ is used to approximate $\overset{\circ}{f}$ when doing GC. The paper found that, with $\mu = 0.5$ yielded models that achieved better scores (FID, KID, and IS) on the CIFAR-10, ImageNet 32x32, CelebA 64x64, and LSUN Church 64x64 datasets. Claims And Evidence: Experiment results were easy to understand. Improved FID scores on 4 datasets over vanilla CT and OT coupling methods clearly show that GC has merit. Methods And Evaluation Criteria: The benchmark datasets and the metrics used to evaluate trained models seem reasonable. However, using datasets with higher resolution images would make the experimental results stronger. Baselines to compare GC against (i.e., iCT-IC and iCT-OT) are reasonable as well. Theoretical Claims: I think the theoretical results are clearly stated. I also briefly checked the proof, and I found no glaring issues. However, I think there might be an issue with the use of the proxy term $\widetilde{\mathcal{R}}\_t = \mathbb{E}[\\| \dot{x}_t - v_t(x_t) \\|^2 ]$. The paper claims that it is "a proxy term for $\mathcal{R}(\theta)$" where $\mathcal{R}(\theta)$ captures the discrepancy between the CT loss and the CD loss but does not provide any mathematical justification for this. Honestly, I failed to find a clear connection between $\widetilde{\mathcal{R}}\_t$ and $\mathcal{R}(\theta)$ by myself. What I noticed is that $\widetilde{\mathcal{R}}\_t$ measures the variance of target velocity values in the conditional flow matching loss [1], and its being lower is better for training a flow matching model. However, we are training consistency models with with a different loss here, and it is unclear how target variance for the conditinal flow matching loss would relate to the consistency training loss. I encourage the authors to make this connection clearer. *Citation* * Lipman et al. "Flow Matching for Generative Modeling." 2022. Experimental Designs Or Analyses: I think the experiments are sound. Supplementary Material: I read through the proofs and found no major issues. I skimmed the sections on implementation details and found no issues either. Relation To Broader Scientific Literature: The paper provides new insights on why CT underforms CD, and provides a new way to improve the CT training algorithm. The authors rightly situate their approach among methods that use more sophisticated couplings between the data distributions and the noise distributions. Essential References Not Discussed: I believe the references are adequate. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: * "A most remarkable" -> "The most remarkable" * "consists in" -> "involves" * Dash in Latex is "---" instead of "--". Moreover, there should be no spaces before and after it. Hence, "velocity field -- thereby changing the target flow -- to reduce" should have been "velocity field---therby changing the target flow---to reduce" * "Two Diracs" -> "Two Dirac delta funcitons" or "Two impulses" * "$p_T \approx p(\sigma_T z)$" -> "$p_T \approx \mathcal{N}(0, \sigma_T^2 I)$" * "$\sigma_d$ the variance of the data" -> "\sigma_d is the standard deviation of the data" * "leaving the (asymptotic) quadratic loss" -> "except the version of the loss where the L2 distance function is used." * "While not fully solving the alignment issue either" -> remove "either" * "At $\mu = 0$ -> "For $\mu = 0$" * "we observe fast convergence but early divergence" -> "the FID score decreases faster than other configurations early in the training process, but it soons increases as training progresses further." Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed and positive assessment. We address the raised questions/weaknesses below. --- > The paper claims that [$\tilde{\mathcal{R}}_t$] is **"a proxy term for $\mathcal{R}(\theta)$"**, [...] but does not provide any mathematical justification for this. [...] I encourage the authors to make this connection clearer. *[emphasis ours]* We can make this connection clearer in the case of the quadratic loss ($\alpha=2$). Indeed, we can **bound the regularization term with the proxy term** thanks to the Jacobian's maximum singular value $s_{\text{max}}( \frac{\partial f_\theta}{\partial x} )$, which is bounded as typical networks are Lipschitz: $ \left\| \frac{\partial f\_\theta}{\partial x}(x\_t, \sigma_t) \left( \dot{x}\_t - v_t(x\_t) \right) \right\|^2 \leq \left\| \frac{\partial f\_\theta}{\partial x} \right\|^2 \|\dot{x}\_t - v\_t(x\_t) \|^2 \leq s^2\_{\text{max}} ( \frac{\partial f\_\theta}{\partial x} ) \|\dot{x}\_t - v\_t(x\_t) \|^2 $ Moreover, under some other assumptions on $f$, for example the fact that it is close to a scaling function for large $t$ (see Corrolary 2), if $f(x, \sigma_t) = \frac{\sigma_0}{\sigma_t} x$, then we would have: $\left\| \frac{\partial f_\theta}{\partial x}(x_t, \sigma_t) \left( \dot{x}_t - v_t(x_t) \right) \right\|^2 = (\frac{\sigma_0}{\sigma_t})^2 \|\dot{x}_t - v_t(x_t) \|^2$. We will include this discussion in the next version of our paper. --- **Typos.** We thank the reviewer for all the suggestions and the careful reading. We will modify the paper accordingly.
Summary: The paper analyzes and discusses a discrepancy between consistency distillation and consistency training in consistency models, and proposes a novel consistency training procedure to ameliorate the problem by leveraging the solution of the probability flow ODE learned by the model during training. ## Update after rebuttal I confirm that the approach seems promising and well motivated. However, the fact that the authors use different training settings makes it hard to compare the experimental results with the ones from related literature. Therefore I will keep my score as is. Claims And Evidence: Yes, the claims are supported by empirical evaluation i sections 4 and 5, see below. Methods And Evaluation Criteria: The experiments are suited to evaluate the proposed method. In particular, the datasets are standard benchmarks for conistency models, and the baselines iCT and ECT are the two most common baselines for consistency training. However, the experimental results in iCT are not convincing as they are quite far from the results reported in the original paper, and the model has been tested only on small settings for ECT. See 'Other strengths and weaknesses'. Theoretical Claims: I checked the correctness of the claims in section 4 and the proofs in appendix A. Experimental Designs Or Analyses: The authors provide extensive evaluation of the method compared to baselines using independent coupling and minibatch OT, showing the effective advantage of using the proposed method. The analysis in figures 2 and 3 confirm the claims made in section 4. Supplementary Material: I reviewed and appreciated the extended details provided in sections A, B, C and D. The hyperparameters for iCT differ from the one reported in the iCT paper, which could explain the difference in FID. Relation To Broader Scientific Literature: Consistency Models are a promising recent generative models for one-step generation. The proposed method exploits an improved training procedure, which is relevant both for advancing the understanding of the models and for practical applications. Essential References Not Discussed: I would suggest adding "Minimizing Trajectory Curvature of ODE-based Generative Models" from Sangyun Lee, Beomsu Kim, and Jong Chul Ye, to the discussion of coupling methods in section 3. Other Strengths And Weaknesses: As a weakness, the reported empirical results differ significantly from the original baseline iCT, which makes it hard to asses the validity of the proposed method. One problem is surely due to the lack of an open source official implementation of iCT, but the differences are a bit too large. However, the authors do include some experiments on ECT, of which the original implementation is open sourced, and there the results are convincing. I would suggest the authors to improve their iCT baseline, and to include more experiments based on ECT, as when using a pretrained network, the experiments are relatively fast and can be done with small batch size. A clear strength of the proposed method is the fact that it does not add any extra model parameters, and maintains the same sampling speed of equivalent baselines with independent coupling, for only a minor overhead in training time due to the extra forward pass. The model can potentially scale to much higher data dimensions and smaller batch size compared ot minibatch optimal transport. Other Comments Or Suggestions: I wonder if the authors have tried annealing the $\mu$ parameter from 0 to 1 during training. Intuitively, I would expect that at early training iterations, the model would benefit from using mostly the independent coupling, as the learned solution is probably still quite off, and then as training progresses and the sample quality improves, the model can rely more and more on the generator. Questions For Authors: - 1) How would you explain that using minibatch optimal transport on LSUN slightly outperforms the generator coupling? I would expect the minibatch OT coupling to work worse given the small batch size. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thorough and constructive review. We address the raised questions/weaknesses below. --- > **1.** How would you explain that using **minibatch optimal transport on LSUN slightly outperforms** the generator coupling? *[emphasis ours]* One of our contributions is to explain why using batch-OT in consistency models is a strong baseline via our theoretical analysis (cf. Sections 2.3 and 3). This further highlights the overall better performance of GC as suggested in Section 4.2.2 and confirmed in Table 1. We do not have a clear reason why LSUN is an exception. Note however that GC and OT are nearly in the same confidence interval of the FID on this dataset, while GC is significantly superior to batch-OT on CIFAR-10 and CelebA. --- > **2.** As a weakness, the reported empirical results **differ significantly from the original baseline iCT**. One problem is surely due to the lack of an open source official implementation of iCT. I would suggest the authors to improve their iCT baseline, and to **include more experiments based on ECT**, as the experiments are relatively fast and can be done with small batch size. *[emphasis ours]* The difference in FID values stems from the following differences: the original iCT baseline uses 400k iterations with a batch size of 1024, while we use 100k iterations with a batch size of 512. Still, **the comparison of all models in our paper is fair**: they are trained and evaluated in the same setting, yielding sound conclusions. Moreover, in the compute-efficient ECT setting, we are still superior to independent coupling. As recommended, we provide **new results in the ECT setting** on a larger-resolution ImageNet and FFQH (new dataset) that we will include in Section 5.3: ### ECT Results (FID) | Model | FFHQ-64 (Short) | FFHQ-64 (Long) | ImageNet-64 (Short) | ImageNet-64 (Long) | |-----------------------|----------------|---------------|----------------------|---------------------| | **ECT-IC** | 13.29 ± 0.11 | 9.68 ± 0.08 | 10.82 ± 0.18 | **5.84** ± 0.21 | | **ECT-GC (μ=0.3)** | **11.73** ± 0.09 | **8.51** ± 0.12 | **10.31** ± 0.22 | 6.39 ± 0.20 | Overall, GC is superior to IC on all datasets for iCT (4/4), and on 5/6 settings for ECT. Note that in the ECT setting, we could run the experiments with the **same training time** than in the original paper, and thus report similar FID values. This validates the benefits of using GC in practice. --- > **3.** I wonder if the authors have tried **annealing the parameter from $0$ to $1$** during training. Intuitively, I would expect that at early training iterations, the model would benefit from using mostly the independent coupling, as the learned solution is probably still quite off, and then as training progresses and the sample quality improves, the model can rely more and more on the generator. *[emphasis ours]* Indeed, this is an interesting question. We did experiment with this type of annealing which resulted in lower performance than with a fixed $\mu$. One explanation is that it does not interact well with the scheduling on the number of timesteps that is periodically increased during training: the generator is not optimal on newly introduced timesteps. --- > **4.** I would suggest adding "Minimizing Trajectory Curvature of ODE-based Generative Models" from Sangyun Lee, Beomsu Kim, and Jong Chul Ye, to the discussion of coupling methods in section 3. We thank the reviewer for this interesting reference which we will include in our discussion. In this work, the authors propose to learn an encoder from data to noise, and use this encoder as a way to construct a coupling when training a flow model. In comparison to our method, their algorithm is based on a joint training of the velocity field and of the noise encoder which then requires an additional network.
Summary: This paper examines consistency models, a technique for achieving single-step (or few-step) sampling in diffusion-based generative modeling and proposes to modify the data-noise coupling used during training so as to reduce the discrepancy between consistency training and consistency distillation. Specifically, the work introduces generator-augmented flows, which leverage an auxiliary generator (i.e., a consistency model proxy) to form “better” couplings between data points and noise. The authors show that this approach can reduce the variance in the velocity-field estimation (i.e., bridging the gap between training and distillation) and also decrease the transport cost from noise to data. The paper also clarifies that consistency training and consistency distillation do not coincide, even in the continuous-time limit, because the Monte Carlo estimate of the velocity field used in consistency training differs from the true velocity in distillation. The authors propose generator-augmented coupling (GC), in which an already-trained or concurrently-trained consistency model generates better endpoints for the intermediate noised data. This approach yields reduced discrepancy and lower transport cost compared to the standard “independent coupling” of data and noise. Claims And Evidence: The paper’s theoretical proofs and experiments are well aligned. Methods And Evaluation Criteria: The authors adopt mainstream metrics (FID, KID, IS) for generative image quality, consistent with typical evaluation practice in diffusion and consistency modeling. The theoretical framework is couched in well-established techniques (Wasserstein distances, “transport cost” definitions, analyzing velocity fields in PF-ODE form). As for the datasets and benchmarks, the chosen datasets (CIFAR-10, CelebA, ImageNet $32\times 32$, LSUN Church $64\times 64$) are standard, ensuring results are comparable to prior consistency and diffusion papers. Theoretical Claims: The derivations about the discrepancy between training and distillation (Theorem 1) are well grounded. The arguments for reduced transport cost are fairly straightforward but well explained (they rely on the fact that $c(t)$ can be shown to decrease under certain conditions). Experimental Designs Or Analyses: The authors train on well-known tasks with standard settings for consistency or diffusion-based generative modeling (e.g., iCT from Song & Dhariwal 2024). They measure the relevant metrics (FID, KID, IS) using recognized library code. The authors also compared their results with iCT-IC, iCT-OT and examine short vs. long training schedules. Supplementary Material: Yes, I reviewed the entire supplement. Relation To Broader Scientific Literature: The paper provides a refined theoretical view on the difference between consistency training and distillation, clarifying a claim about them “coinciding” in the continuous-time limit. This clarifies or partially corrects earlier suggestions by Song et al. that the discrepancy would vanish entirely. It builds on a line of research that tries to enhance data-noise coupling in generative modeling (OT-based or reflow-based methods). Its “generator-augmented flow” is a distinctive addition, conceptually simpler than full OT yet apparently effective. The authors have cited all the major literature, including consistency modeling (Song et al. 2023; Song & Dhariwal 2024) and prior works on alternative couplings (Pooladian et al., Dou et al.). Essential References Not Discussed: I don't think there are major references missing. Other Strengths And Weaknesses: Strengths: The authors systematically analyze the limiting behavior of consistency losses, which is rarely spelled out in prior literature. Generator-augmented flows are easy to implement and require no heavy OT solvers. More importantly, on standard datasets, the approach yields consistent improvements, beyond batch-OT. Weaknesses: 1. The results primarily cover moderate-scale $32\times 32$ or $64\times 64$ tasks. Additional experiments on $256\times 256$ or higher could strengthen claims of broad applicability. Can the authors reply to this, or add experiments on this? 2. The concept partly assumes an “ideal” or well-trained generator for the GC approach. The paper addresses this with “joint learning,” but theoretically that is more complicated, and some of the formal results rely on the generator’s accuracy. What do you think about this? 3. Could the approach generalize easily to other cost functions in the coupling—e.g., $L_1$ or robust divergences? Does the preference for quadratic cost (in the “transport cost” sense) strictly matter for performance in your experiments? Other Comments Or Suggestions: Please refer to the "weaknesses" section. Questions For Authors: Please refer to the "weaknesses" section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed and positive assessment. We address the raised questions/weaknesses below. --- > **1.** The experiments cover moderate-scale. **Could the authors add an experiment on $256\times256$**? *[emphasis ours]* We cannot conduct experiments on larger resolutions from scratch for computational reasons. Moreover, the only pre-trained diffusion models available in the compute-efficient ECT setting are on ImageNet $512 \times 512$, which is also too large given our resources. Still, we were able to obtain **new results on a larger-resolution ImageNet** ($64\times 64$ conditional) and a **new dataset** (FFHQ $64\times 64$) in the ECT setting that we will include in Section 5.3: ### ECT Results (FID) on ImageNet-64 and FFHQ-64 | Model | FFHQ-64 (Short) | FFHQ-64 (Long) | ImageNet-64 (Short) | ImageNet-64 (Long) | |-----------------------|----------------|---------------|----------------------|---------------------| | **ECT-IC** | 13.29 ± 0.11 | 9.68 ± 0.08 | 10.82 ± 0.18 | **5.84** ± 0.21 | | **ECT-GC (μ=0.3)** | **11.73** ± 0.09 | **8.51** ± 0.12 | **10.31** ± 0.22 | 6.39 ± 0.20 | --- > **2.** The concept partly assumes an “ideal” or well-trained generator for the GC approach. The paper addresses this with “joint learning,” but theoretically that is more complicated, and some of the formal results rely on the generator’s accuracy. What do you think about this? Indeed, our theoretical assumption requires an ideal generator to construct GC. Our joint learning approach allows to approximate this ideal generator and, when the generator is close enough to an ideal one, joint learning succeeds. Interestingly, we show that when the generator is not trained enough on IC, then GC does not work anymore (see Section C.1 and Figure 7 in Appendix). This proves the effectiveness of the joint learning approach: the approximation is good enough to construct GC trajectories. --- > **3.** Could the approach generalize easily to **other cost functions in the coupling**—e.g., $L_1$ or robust divergences? Does the preference for quadratic cost (in the “transport cost” sense) strictly matter for performance in your experiments? *[emphasis ours]* GC is **agnostic to all distance functions**, whether the one used in consistency model training (Eq. 4/6/15) or the transport cost used in the analysis in Section 4.2.2. Regarding the transport cost, the algorithm is not built to minimize this distance directly, as opposed to batch-OT. We did not study this further but intuitively expect GC to also minimize transport cost with regards to other cost functions. We would be happy to provide further clarification, should we have misinterpreted to the reviewer's question.
null
null
null
null
null
null
Shortcut-connected Expert Parallelism for Accelerating Mixture of Experts
Accept (poster)
Summary: The execution of mixture-of-experts model contains two all-to-all communication steps on the critical path of computation. The authors of this paper propose to use the activations before the attention layer of the current block as the input for the experts in the current block, which breaks the sequential dependency between the attention and expert layers, and opens the possibility to overlap the computation and communication. Experiments on the accuracy and inference time show the effectiveness of the new design. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: Yes, the experimental design is good in general. But it's better to include more modern designs like the ones used in deepseek v2 (more smaller experts). Supplementary Material: No Relation To Broader Scientific Literature: A new design to MoE architecture. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths** 1. The paper addresses an important problem: reducing communication overhead in the serving of MoE models. 2. By modifying the architecture to break the sequential dependency between attention and MoE layers, the authors enable overlapping computation and communication, effectively hiding communication overhead. 3. Experiments demonstrate that the proposed computation-communication overlap strategy in the new architecture is effective. 4. The paper is well-written and easy to follow. **Weaknesses** My primary concern is that the skip connection design lacks novelty. Additionally, the model quality requires further justification or experiments to substantiate that it's actually effective. Existing models such as GPT-J and GPT-NeoX already parallelize attention and feedforward networks, while architectures like Snowflake Arctic [1] employ a structure where the input to the self-attention layer is directly fed to the MoE layer - closely resembling this work’s proposal. However, current SOTA models (e.g., DeepSeek V3/R1, Llama 3.2) do not adopt this parallel structure, raising questions about whether the proposed architecture can achieve competitive accuracy/quality at scale. The experiments in this paper inadequately validate model quality, as the metrics in Table 2 fall significantly below those of publicly available models, including smaller ones. In general, altering model architecture for performance optimization but do not validate the final accuracy on large scale carry risks, as such changes may compromise final accuracy. The accuracy number of Snowflake Arctic [1] also shows that this parallel architecture might harm the model quality (when it compares with other models with similar size of parameters). While the paper is well-argued on the motivation and the system-side optimizations appear sound, I remain cautious about the final model quality of the new architecture. [1] https://www.snowflake.com/en/blog/arctic-open-efficient-foundation-language-models-snowflake/ Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > 1. It's better to include more modern designs like the ones used in deepseek v2 (more smaller experts). Given our constraints on available hardware, we cannot conduct experiments on large-scale MoE models like DeepSeek-V2 (236B). Nonetheless, we conduct experiments on the OLMoE model (7B), a recent MoE model that features more modern designs, incorporating the fine-grained expert architecture (more smaller experts) similar to DeepSeek-V2. We train both the original OLMoE and the OLMoE+ScMoE from scratch, using 40,000 samples from the TuluV3 dataset. As demonstrated in the following table, our proposed ScMoE architecture not only accelerates the training by 1.13x and inference by 1.19x, but also achieves a marginally higher average accuracy (43.77% vs 43.25%). | |Train Speedup|Inference Speedup|ARC-C|BoolQ|OBQA|RTE|WG|PIQA|AVG| |-|-|-|-|-|-|-|-|-|-| |OLMoE|1|1|24.91%|58.30%|30.00%|44.77%|49.64%|51.90%|43.25%| |OLMoE+ScMoE |1.13x|1.19x|25.51%|58.10%|30.80%|45.49%|49.25%|53.48%|43.77%| > 2. My primary concern is that the skip connection design lacks novelty. Additionally, the model quality requires further justification or experiments to substantiate that it's actually effective. Existing models such as GPT-J and GPT-NeoX already parallelize attention and feedforward networks, while architectures like Snowflake Arctic [1] employ a structure where the input to the self-attention layer is directly fed to the MoE layer - closely resembling this work’s proposal. (1) GPT-J and GPT-NeoX utilize parallel Attention and Feed-Forward Layers to address the All-Reduce communication overhead inherent in Tensor Parallelism, which is notably different from the All-to-All communication overhead encountered in Expert Parallelism. Distinct from existing methods, our proposed ScMoE architecture is specifically designed to minimize the All-to-All communication overhead in Expert Parallelism. (2) Regarding Snowflake Arctic, our work was conducted independently and concurrently with it, and it stands out in several distinct ways. Snowflake Arctic is primarily an industrial product and does not include any published papers or detailed analyses of their proposed Dense-MoE Hybrid Transformer architecture. In contrast, we thoroughly examine the effectiveness of Shortcut-connected MoE architectures and analyze the phenomena and reasons behind them, offering deeper insights for the research community. Furthermore, our proposed Shortcut-connected MoE demonstrates superior generalization across different MoE model designs (adapt to various MoE placement frequencies). It bears similarity to their Dense-MoE Hybrid Transformer architecture, specifically to our proposed ScMoE (Pos-1), which is more like a subset of our study. > 3. However, current SOTA models (e.g., DeepSeek V3/R1, Llama 3.2) do not adopt this parallel structure, raising questions about whether the proposed architecture can achieve competitive accuracy/quality at scale. The experiments in this paper inadequately validate model quality, as the metrics in Table 2 fall significantly below those of publicly available models, including smaller ones. In general, altering model architecture for performance optimization but do not validate the final accuracy on large scale carry risks, as such changes may compromise final accuracy. The accuracy number of Snowflake Arctic [1] also shows that this parallel architecture might harm the model quality (when it compares with other models with similar size of parameters). While the paper is well-argued on the motivation and the system-side optimizations appear sound, I remain cautious about the final model quality of the new architecture. (1) We propose the ScMoE architecture as a viable option for the backbone of MoE models. In line with the initial experiments conducted on previously proposed MoE architectures, such as standard MoE and shared-expert MoE, we train the ScMoE model from scratch to ensure a fair comparison. When trained under identical conditions, ScMoE demonstrates superior efficiency while achieving comparable accuracy, compared to both the standard MoE and shared-expert MoE methods. (2) Due to our limited devices, we are unable to train with pre-training datasets comprising 1 trillion tokens, as is common with existing industrial-level models. Consequently, our experimental models exhibit relatively lower accuracy, as we only train with datasets that are 1/1000th the size. Nevertheless, we offer theoretical analysis upon our empirical results to support the effectiveness of our methods, as discussed in Section 5 and Appendix A.1. (3) According to the provided document on Snowflake Arctic, this parallel architecture offers significant value for practical deployments, rather than solely impacting model quality negatively. The document states that "Arctic is more capable than other open source models trained with a similar compute budget." --- Rebuttal Comment 1.1: Comment: Thanks the authors for addressing my concerns. I agree that the parallel industrial work without publication should not be a blocker for this paper. Changing my score from 2 to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review and for improving the score. If you have any further concerns or questions, we would be more than happy to address them.
Summary: The paper proposes an new method for expert parallelism, which is a paradigm for distributed training and inference of large scale MoE models by dividing experts across multiple devices. The authors address the bottleneck of all-to-all communication between experts and present a new strategy that can overlap communication with computation. Given a model with every other layer being an MoE and each MoE having a shared expert, the shortcut-connected approach overlaps MLP and shared expert computation with the all-to-all communications of routed experts. This results in tangible speedups for MoE training and inference for GPT2-size models. Claims And Evidence: The main premise of reducing expert parallelism overhead is well motivated. The proposed method is clear and the paper shows convincing evidence that the proposed overlapping strategy is beneficial for improving throughput. However, the MoE architecture is fundamentally changed in this method. This leads to slightly decreased performance on pretraining benchmarks, and it is not clear whether this error scales for larger models. Methods And Evaluation Criteria: The evaluation section is very strong and uses sensible benchmarks. The most common pretraining benchmarks are reported along with validation loss. The paper also discusses the speedup of their method, which makes sense as the goal of their method was to reduce MoE distributed communication overhead. Theoretical Claims: I did not check for correctness of any proofs Experimental Designs Or Analyses: The experimental design makes sense. The authors apply their new MoE architecture for LM pretraining on multiple models, up to 7B parameters. I think the analysis can be stronger if all of the the results are consolidated across different model sizes. For example, what is the % speedup, peak GPU memory reduction, change in validation loss, change in pretraining benchmarks as a result of model size? Currently the evaluation is sparse as the benchmarks in Table 2 are reported for two models and the latency + peak memory analysis in Figure 13 uses two different models. Because the paper is fundamentally modifying the MoE architecture it would be more convincing to see all of the evaluations for each model. Supplementary Material: I did not review the supplementary material Relation To Broader Scientific Literature: The authors address the bottlenecks expert parallelism, which is becoming an increasingly important paradigm for training and deploying large scale MoEs. The results in this paper have the potential to speed up MoE training in a broad range of settings, which makes this work widely applicable in the field of MoE research. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The method seems limited to architectures that use shared experts and alternating MoE layers with typical MLPs. While this is a commonly used setup, I think the results would be more convincing if the approach could be extended to architectures that don't follow these exact constraints. The approach also fundamentally modifies the MoE architecture by introducing these shortcut connections. I assume this means that ScMoE cannot be applied to speed up MoE inference for off-the-shelf pretrained models, which limits its wider adoption. Other Comments Or Suggestions: N/A Questions For Authors: 1. How do the validation loss results in Figure 10 compare with a regular MoE as the baseline? 2. How does the overall speedup and benchmark performance scale with model size? Do you expect that the ScMoE approach becomes more viable for larger scale pretraining? 3. Can this shortcut connection approach be applied to already pretrained models? What does the performance look like in such cases? Does your approach necessarily require pretraining a model from scratch to use the shortcut connections 4. How do you think your results could generalize to other MoE models that either do not use shared experts, or have an MoE for every transformer layer? I think the results of this paper are very interesting with strong impact on the field of MoE research in general. However, they currently seem limited to a specific architecture setup, and I am not convinced if this is viable for MoE training at larger scale. The responses to these questions would increase my confidence in recommending this paper for acceptance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > 1. I think the analysis can be stronger if all of the the results are consolidated across different model sizes. We have summarized and supplemented evaluations of GPT models across various sizes to better demonstrate the impact of model size as a standalone variable. The following table presents the speedup, zero-shot perplexity, peak GPU memory reduction, and MoE block latency reduction of ScMoE in comparison to the standard top-2 MoE. The results demonstrate that our proposed ScMoE consistently achieves improvements across various model sizes. | |Model Size|Train Speedup|Inference Speedup|Perplexity (ScMoE / Standard MoE) |Peak GPU Memory Usage Reduction|MoE Block Latency Reduction| |-|-|-|-|-|-|-| |GPT2-MoE-Small| 323M|1.15x|1.22x|29.10/31.60|-53%|-41%| |GPT2-MoE-Medium|1.7B|1.12x|1.17x|17.62/19.18|-50%|-33%| |GPT2-MoE-XL|4.1B|1.12x|1.18x|16.46/17.52|-60%|-18%| > 2. How do the validation loss results in Figure 10 compare with a regular MoE as the baseline? We observed that the loss curve of ScMoE (3.2629 at the final point) demonstrates faster convergence and a lower final loss compared to the standard top-2 MoE (3.3157 at the final point) and shared-expert MoE (3.2974 at last). > 3. How does the overall speedup and benchmark performance scale with model size? Do you expect that the ScMoE approach becomes more viable for larger scale pretraining? (1) To illustrate the effectiveness of speedup in larger-scale scenarios, we conducted additional simulations, the results of which are detailed in **the table "Simulation of larger-scale model training" in our response to Reviewer GE4M**. The findings suggest that the relative ratio of communication to computation duration, influenced by various training and model configurations, is crucial for impacting speedup. Optimal speedup is achieved when these durations are balanced. Despite these variations, ScMoE consistently achieves acceleration. (2) Due to our limited devices, we face challenges in conducting experiments on large-scale MoE models. Nevertheless, we offer theoretical analysis to support the performance of our methods, as discussed in Section 5 and Appendix A.1. Additionally, based on the findings presented in Question 1, we anticipate that the disparity in accuracy between ScMoE and existing MoE architectures will diminish as both model size and the volume of training data increase. > 4. Can this shortcut connection approach be applied to already pretrained models? What does the performance look like in such cases? Does your approach necessarily require pretraining a model from scratch to use the shortcut connections. (1) We propose the ScMoE architecture as a viable option for the backbone of MoE models. In line with the initial experiments conducted on previously proposed MoE architectures, such as standard MoE and shared-expert MoE, we train the ScMoE model from scratch to ensure a fair comparison. (2) The ScMoE models can be constructed using pre-trained shared-expert MoE models by modifying the inputs of the MoE module. Our experiments with this approach showed a decrease in accuracy, indicating a need for further refinements in the fine-tuning process to preserve accuracy. (3) Additionally, it is possible to construct and train a ScMoE model based on a pretrained dense model, employing techniques similar to sparse upcycling, which have been applied to standard MoE models. > 5. The method seems limited to architectures that use shared experts and alternating MoE layers with typical MLPs. How do you think your results could generalize to other MoE models that either do not use shared experts, or have an MoE for every transformer layer? (1) We would like to clarify that we have already conducted experiments on the current mainstream MoE structure of placing MoE module in every Transformer block, as presented by the LLaMA2-MoE experiments in Section 4.2.2, Table 2 and Table 7. Specifically, our LLaMA2-MoE experiments utilize ScMoE (Pos-1) and incorporates the MoE module in every Transformer block, which overlaps the communication and computation within the each Transformer block. (2) Our work focuses on integrating shortcut connections into the MoE architecture to enhance efficiency, which does not rely on the shared-expert MoE. As discussed in Appendix A.2, our DGMoE, which incorporates shortcut connection without shared expert, also offers improvements over the top-2 MoE. Furthermore, ScMoE functions alongside both the standard MoE and shared-expert MoE architectures. When designing models, practitioners have the option to choose from these three architectures. (3) We conduct experiments on the OLMoE model, which originally employs standard MoE without shared experts at each layer. As shown in **the table "Experimental results on integrating ScMoE architecture into the OLMoE model" in response to Reviewer GE4M**, our proposed ScMoE architecture not only achieves acceleration, but also achieves a higher average accuracy.
Summary: This paper presents ScMoE to enhance the computational efficiency of Mixture-of-Experts (MoE) models. By incorporating a shortcut connection that integrates information from the preceding layer with the current layer's computations, ScMoE introduces a concurrent processing mechanism which allows for overlapping communication with computation through the parallel execution of two independent streams: one processing the current layer's input and another propagating residual from the previous layer. As a result, ScMoE significantly accelerates both the training and inference phases by minimizing communication latency while maintaining or even enhancing model quality. Claims And Evidence: Communication and computation can overlap fully depending greatly on hardware and system conditions. Methods And Evaluation Criteria: The claim that model quality is always maintained or improved is only shown on small models. Sometimes the accuracy improvements are very small, and the results might not work for larger models (like OLMoE) or different tasks. Theoretical Claims: The theoretical claims presented in the paper are solid. Experimental Designs Or Analyses: The experiments are limited to only small models, which may limit the generalizability of the results to other models or tasks. Supplementary Material: I only reviewed the experiments presented in the supplementary material. Relation To Broader Scientific Literature: The authors demonstrate that neighboring Transformer layers share significant feature similarity, a pattern consistent with prior research. Essential References Not Discussed: The references are comprehensive and well-organized. Other Strengths And Weaknesses: The proposed method is simple, which is a clear advantage. However, it relies heavily on the assumption that features in neighboring layers are similar, which is often true. Yet, the supplementary materials show that this similarity is not clear in the OLMoE model. In addition, the authors mention that ScMoE-Pos2 does not perform well, which makes me worry about how well the method can work in different cases. Other Comments Or Suggestions: The authors have only conducted experiments on small models for image classification and language tasks, so additional experiments on larger and more powerful models may be necessary. Questions For Authors: Does the proposed method fail if the features change dramatically between neighboring layers? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > 1. The claim that model quality is always maintained or improved is only shown on small models. Sometimes the accuracy improvements are very small, and the results might not work for larger models (like OLMoE) or different tasks. The experiments are limited to only small models, which may limit the generalizability of the results to other models or tasks. The authors have only conducted experiments on small models for image classification and language tasks, so additional experiments on larger and more powerful models may be necessary. (1) Given our constraints on available hardware, we encounter difficulties in conducting empirical experiments on large-scale, industrial-level MoE models. Nevertheless, we perform additional experiments on the OLMoE model [1] and conduct some different evaluation tasks. The detailed experimental configurations are introduced in our response to Reviewer GE4M. As shown in the following table, our proposed ScMoE architecture not only accelerates the training by 1.13x and inference by 1.19x, but also achieves a marginally higher average accuracy (43.77% vs 43.25%). | | Train Speedup | Inference Speedup | ARC-C | BoolQ | OBQA | RTE | WG | PIQA | AVG | | ----------- | ------------- | ----------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | | OLMoE | 1 | 1 | 24.91% | 58.30% | 30.00% | 44.77% | 49.64% | 51.90% | 43.25% | | OLMoE+ScMoE | 1.13x | 1.19x | 25.51% | 58.10% | 30.80% | 45.49% | 49.25% | 53.48% | 43.77% | (2) Beyond the focus on accuracy, we also simulations to validate the effectiveness of speedup in larger-scale scenarios, as shown in **the table "Simulation of larger-scale model training" in our response to Reviewer GE4M**. Despite differences in communication and computation durations across configurations, our proposed ScMoE consistently proves to be effective. > 2. It relies heavily on the assumption that features in neighboring layers are similar, which is often true. Yet, the supplementary materials show that this similarity is not clear in the OLMoE model. In addition, the authors mention that ScMoE-Pos2 does not perform well, which makes me worry about how well the method can work in different cases. Does the proposed method fail if the features change dramatically between neighboring layers? (1) Recent studies [2,3,4,5] have commonly observed the high similarity between features in neighboring layers as a characteristic of Transformer-based LLMs, which can be utilized to improve the efficiency of LLM operations. (2) Our supplementary materials on the OLMoE model further corroborate that the observed feature similarity is within an expected range and aligns with prevalent findings in the field. As illustrated in Figure 15, most data points near the diagonal line are yellow, indicating nearly 100% similarity. Only the output features of 1A and 1M exhibit comparatively lower similarity, yet maintain a significant 50%. (3) We would like to clarify that the statement "ScMoE-Pos2 does not perform well" was made in a comparative context. In fact, the loss differences among ScMoE-Pos1 (3.2615), ScMoE-Pos2 (3.2818), and ScMoE-Pos2-L0Pos1 (3.2629) are relatively minor. Moreover, the loss values for the standard top-2 MoE (3.3157) and shared-expert MoE (3.2974) are higher than those of the three ScMoE configurations. This indicates that despite variations in feature similarity across certain layers, ScMoE remains effective. Additionally, we will include a detailed comparison of these loss curves in the appendix for further clarity. (4) Our strategy for selecting shortcut-connected positions, detailed in Section 5.1.3, is designed to achieve optimal accuracy. Additionally, based on these observations of similarity, we can adapt this strategy for subsequent MoE model design by opting not to use ScMoE in the first two Transformer layers. Similarly, DeepSeek-MoE [6] chooses not to apply the MoE module in the first Transformer layer. **Reference** [1] Muennighoff, Niklas, et al. "Olmoe: Open mixture-of-experts language models." *arXiv preprint arXiv:2409.02060* (2024). [2] Sun, Qi, et al. "Transformer layers as painters." arXiv preprint arXiv:2407.09298 (2024). [3] He, Shwai, et al. "What matters in transformers? not all attention is needed." arXiv preprint arXiv:2406.15786 (2024). [4] Men, Xin, et al. "Shortgpt: Layers in large language models are more redundant than you expect." *arXiv preprint arXiv:2403.03853* (2024). [5] Gromov, Andrey, et al. "The unreasonable ineffectiveness of the deeper layers." *arXiv preprint arXiv:2403.17887* (2024). [6] Dai, Damai, et al. "Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models." *arXiv preprint arXiv:2401.06066* (2024).
Summary: This paper proposes Shortcut-connected MoE (ScMoE) to reduce the All-to-All communication bottleneck in expert parallelism of MoE model training. Traditional MoE models suffer from high All-to-All communication costs due to dependencies between computation and communication. ScMoE solves this by using a shortcut connection that allows the previous layer’s representations to be processed in parallel with the current layer. It replaces top-2 gating with a shared expert for the current layer and a top-1 expert for the previous layer, enabling full overlap between computation and communication. The authors also propose an adaptive overlapping scheduling strategy, achieving 1.49x faster training and 1.82x faster inference while keeping model accuracy comparable or slightly better than standard top-2 MoE. Experiments demonstrate that ScMoE maintains strong performance while improving efficiency. The paper also provides theoretical analysis proving stable gradient propagation. Overall, ScMoE co-designs algorithm and infrastructure, and is a novel and effective method for accelerating MoE training. --- ### update after rebuttal After reading the rebuttal, my major concerns have been addressed. Claims And Evidence: Most of the claims are well-supported by experiments and analysis. 1. Decoupling and overlapping communication and computation via shortcut connection: The shortcut connection proposed by this paper does allow the overlapping between computation and communication, as demonstrated in Figure 5 and Figure 6. Experiments also confirm that ScMoE overlaps up to 100% of communication time, reducing the impact of All-to-All bottlenecks. 2. Maintaining Model Quality: The authors claim that the shortcut connection maintains the model quality. This is supported by the experiments, where ScMoE achieves 79.3% Top-1 accuracy on ImageNet, similar to top-2 MoE, and performs well in NLP tasks. 3. Generalization Capability: The experiments shows that ScMoE improves performance on different hardwares (A30-PCIe with high communication ratio, and A800-NVLink with low communication ratio), on different workloads (vision tasks, language tasks). --- However, I still have the following concerns: 1. The author states that the standard MoE model structure consists of Block-MLP and Block-MoE, as shown in Figure 2, which aligns with the design in DeepSpeed-MoE. However, I am afraid such a structure is out-dated, and in most mainstream MoE models (e.g., DeepSeekMoE, DeepSeek-V3), MoE layers are predominantly arranged sequentially rather than interspersed with MLP layers. I am curious whether ScMoE is compatible with this mainstream MoE architecture. Methods And Evaluation Criteria: The proposed ScMoE architecture and adaptive scheduling strategy effectively target the MoE communication bottleneck. The evaluation criteria makes sence, covering: 1. Benchmarks: Vision tasks (ImageNet) and NLP tasks (GPT-2/LLaMA2 MoE, with model size up to 7B). 2. Metrics: Model quality and accuracy, training efficiency, inference efficiency. 3. Baselines: Standard top-2 MoE (DeepSpeed MoE), Shared-expert MoE, and Standard top-1 MoE for comparison. 4. Hardwards: A30-PCIe with high communication ratio, and A800-NVLink with low communication ratio. For A800, 8 GPUs within 1 node and 16 GPUs across 2 nodes are evaluated. Theoretical Claims: The appendix provides a theoretical analysis of gradient propagation, showing that ScMoE does not affect model convergence. The authors prove that: 1. The shortcut connection preserves gradient flow between layers. 2. The shared expert and expert gating maintain stable optimization dynamics. Experimental Designs Or Analyses: The experiments are well-controlled: 1. Diverse hardware setups: 8×A30 PCIe (high communication overhead) and 8×A800 and 16xA800 NVLink (low communication overhead). 2. Multiple Workloads: SwinV2-MoE-S (vision), GPT-2/LLaMA2-MoE (NLP). 3. Ablation studies: Different shortcut placements (Pos-1/2/3) to find the best trade-off between speed and accuracy. --- However, I suppose the following evaluation will strengthen the analysis: 1. Evaluation on Mainstream MoE Structure: The workloads used in ScMoE is based on DeepSpeed-MoE, where MoE layers are interspersed with MLP layers. However, I think the current mainstream MoE structure (where MoE layers are predominantly arranged sequentially) should also be evaluated, e.g., DeepSeekMoE. 2. Stronger Baselines: It would be valuable to compare ScMoE with other advanced All-to-All optimization techniques, such as hierarchical All-to-All and pipelining. Additionally, DeepSeek recently open-sourced DeepEP, a high-performance expert parallelism library. While I understand that ScMoE is either concurrent with or an earlier work relative to DeepEP, including an analysis of their differences and potential complementarities would be beneficial. 3. Larger Models: Since ScMoE modifies the model structure, it may impact model convergence and overall quality. While the authors provide empirical results on models up to 7B and offer theoretical analysis, additional experiments on larger models (e.g., 13B, 30B) would strengthen the evaluation. 4. Larger Multi-node Cluster Analysis: I am concerned of the performance on larger multi-mode clusters. I understand authors may not be able to obtain larger scale multi-mode cluster (e.g., 64 A800 GPUs). However, it would be beneficial if authors provide analysis or simulated evaluation of the theoretical performance of ScMoE on larger clusters. Supplementary Material: The appendix contains useful additional insights and strengthens the paper’s conclusions. 1. DoubleGating MoE (DGMoE): A variant with two independent top-1 experts, but it suffers from redundant expert selection. 2. Gradient theoretical analysis: Proof that ScMoE ensures stable training. 3. Expert offloading: A memory-saving strategy that reduces GPU usage by up to 60%. Relation To Broader Scientific Literature: ScMoE builds upon existing MoE parallelism research, including: 1. Expert Parallelism (Lepikhin et al., 2021), which introduces All-to-All communication for MoE. 2. Shared Expert MoE (Rajbhandari et al., 2022), which uses a fixed expert for every layer, reducing All-to-All communication. 3. Pre-gated MoE, which is optimized MoE inference by preselecting experts. ScMoE combines shared experts with shortcut connections to further reduce communication overhead, making it a novel extension of these works. Essential References Not Discussed: Here are several related works: - Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity [JMLR’22], which is the first to use top-1 gating for efficiency. - DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models [Arxiv’24], which is one of the mainstream MoE model structures. Other Strengths And Weaknesses: Strengths: 1. Significant speedup without accuracy loss. 2. Well-structured experiments, with multiple baselines and ablation studies. 3. Good clarity and organization, making complex ideas easy to understand. Weaknesses: 1. Incremental improvement: ScMoE mainly extends shared-expert MoE with shortcut connections. 2. Model structure may be out-dated: DeepSpeed-MoE, where MoE layers are interspersed with MLP layers, is not the mainstream MoE structure currently. 3. Lack of evaluation on larger-scale models and larger-scale multi-node cluster. Other Comments Or Suggestions: See the questions for authors. Questions For Authors: Experiment Related: 1. Are GPT-2/LLaMA2 MoE models trained from scratch or fine-tuned from a dense model? 2. The experiments test single-node (8 GPUs) and two-node (16 GPUs) training. How does ScMoE scale to hundreds or thousands of GPUs? More analysis of ScMoE on multi-mode clusters would be helpful. 3. Are the experiments conducted on FP32 precision of BF16 precision? --- Overlap Related: 1. Figure 6 provides a visual timeline comparison of different MoE architectures. Would it be possible to quantify and report exact percentages of communication hidden by computation in Table 1 or 2? 2. If we increase the number of experts per MoE layer, how does the overlap ratio change? Does a larger number of experts negatively impact ScMoE’s effectiveness? 3. What happens when communication is much slower than computation? Can ScMoE still provide significant speedup? --- Model Quality Related: 1. Do you observe any convergence differences between ScMoE and top-2 MoE during training? If so, does ScMoE require different hyperparameters (learning rate, warm-up schedule, etc.)? 2. Have you tested transfer learning performance? For example, if a vision model is pre-trained with ScMoE and fine-tuned on a different task, does the shortcut mechanism still provide benefits? 3. Is ScMoE scalable to larger MoE models (e.g., 13B, 30B, 70B, and even DeepSeek-V3 671B)? I am concerned about the convergence and model quality. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > 1. Concerns regarding the model structure may be outdated, particularly in relation to the compatibility of ScMoE with the mainstream MoE architecture, where MoE layers are arranged sequentially rather than interspersed with MLP layers. We would like to clarify that we have already conducted experiments on the current mainstream MoE structure of placing MoE module in every Transformer block, as presented by the LLaMA2-MoE experiments in Section 4.2.2, Table 2 and Table 7. Specifically, our LLaMA2-MoE experiments utilize ScMoE (Pos-1) and incorporates the MoE module in every Transformer block, which overlaps the communication and computation within the each Transformer block. Additionally, we conduct experiments on OLMoE model, an advanced MoE model that integrates an MoE module at each layer. As shown in **the table "Experimental results on integrating ScMoE architecture into the OLMoE Model" in our response to Reviewer GE4M**, ScMoE architecture not only achieves acceleration, but also achieves a marginally higher average accuracy. > 2. Concerns regarding stronger baselines. We select a recently proposed All-to-All optimization technique, "MoE Shared Expert Overlap," which overlaps All-to-All communication with the computation of shared experts, as a stronger baseline for comparison. As shown in the following table, ScMoE achieves acceleration compared to "MoE Shared Expert Overlap," facilitated by breaking dependencies for improved overlap. |SwinV2-MoE|Train Speedup|Inference Speedup| |-|-|-| |Standard Top-2|0.76x|0.68x| |Shared-Expert|0.94x|0.92x| |MoE Shared Expert Overlap|1|1| |ScMoE|1.14x|1.25x| > 3. Concerns regarding larger multi-node cluster analysis. Does a larger number of experts negatively impact ScMoE’s effectiveness? To demonstrate scalability in large-scale clusters, we conducted simulations across different number of GPUs, from 16 to 128, as shown in **the table "Simulation of larger-scale model training" in our response to Reviewer GE4M**. Despite differences in communication and computation durations across configurations, ScMoE consistently demonstrates effectiveness. Notably, in entries No.5 to No.8, an increased number of experts does not negatively affect the effectiveness of ScMoE. > 4. Incremental improvement: ScMoE mainly extends shared-expert MoE with shortcut connections. Our work focuses on integrating shortcut connections into the MoE architecture to enhance efficiency, which does not rely on the shared-expert MoE. As discussed in Appendix A.2, our DGMoE, which incorporates shortcut connection without shared expert, also offers improvements over the top-2 MoE. ScMoE is presented as the most efficient architecture to facilitate our proposed shortcut connections. Furthermore, ScMoE functions alongside both the standard MoE and shared-expert MoE architectures. When designing models, practitioners have the option to choose from these three architectures. > 5. Are GPT-2/LLaMA2 MoE models trained from scratch or fine-tuned from a dense model? The models are trained from scratch. We introduce the ScMoE architecture as a viable option for pre-training a MoE model from scratch, enhancing the efficiency of its pre-training, fine-tuning, and inference. > 6. Are the experiments conducted on FP32 precision of BF16 precision? BF16 precision. > 7. Would it be possible to quantify and report exact percentages of communication hidden by computation in Table 1 or 2? The exact percentages for Table 1 are directly reflected in Figure 7 (1), which hides 70% communication. > 8. What happens when communication is much slower than computation? Can ScMoE still provide significant speedup? As shown by the simulation results of No. 1 in Question 3, ScMoE achieves a significant speedup of 1.31x, even when the communication time is three times longer than the computation duration. > 9. Do you observe any convergence differences between ScMoE and top-2 MoE during training? If so, does ScMoE require different hyperparameters (learning rate, warm-up schedule, etc.)? We observed that the loss curve of ScMoE (3.2629 at the final point) demonstrates faster convergence and a lower final loss compared to the standard top-2 MoE (3.3157 at the final point). To ensure a fair comparison in our experiments, ScMoE demonstrates comparable accuracy under identical hyperparameters. > 10. Have you tested transfer learning performance? We conduct additional fine-tuning experiments on the trained OLMoE models discussed in Question 1, using 5,000 samples from Alpaca dataset. The results shows that ScMoE still achieves a higher average accuracy (44.88% vs 44.07%). > 11. Is ScMoE scalable to larger MoE models (e.g., 13B, 30B, 70B, and even DeepSeek-V3 671B)? Due to our limited devices, we face challenges in conducting experiments on large-scale MoE models. Nevertheless, we offer theoretical analysis to support the effectiveness of our methods, as discussed in Section 5 and Appendix A.1.
Summary: MoE model is an effective way to scale up model parameters while preserving the inference latency. When training MoE models, due to the extremely large parameters, expert parallelism is widely used to distribute the computational workload. However, this also introduces expensive all-to-all communication cost. This paper proposes a new MoE architecture to overlap communication and computation, which helps to greatly save the training time. Small-scale experiments show that the new architecture can speedup the training while maintaining the same model performance as the original MoE architecture. Claims And Evidence: Yes Methods And Evaluation Criteria: I think the accuracy on benchmark datasets are a bit low to provide meaningful comparisons. It'd be better to train bigger models or longer. Theoretical Claims: NA Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: The authors propose a new MoE architecture which may be insightful to readers. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - The idea of overlapping communication and computation in MoE training is novel and interesting. This could be a useful technique for practitioners. Weakness: - It would be great if the idea can be validated at larger model scale and longer training runs. Currently the accuracy on benchmarks are a bit low. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > 1. I think the accuracy on benchmark datasets are a bit low to provide meaningful comparisons. It'd be better to train bigger models or longer. It would be great if the idea can be validated at larger model scale and longer training runs. Currently the accuracy on benchmarks are a bit low. Given our constraints on available hardware, we encounter difficulties in conducting empirical experiments on large-scale, industrial-level MoE models. Nevertheless, we perform additional experiments on the OLMoE model [1], an advanced MoE model that activates 1B of its total 7B parameters. We initiate training from scratch for both the original OLMoE and the OLMoE+ScMoE models using 40,000 samples from the TuluV3 dataset [2]. As illustrated in the following table, our proposed ScMoE architecture not only accelerates the training by 1.13x and inference by 1.19x, but also achieves a marginally higher average accuracy (43.77% vs 43.25%). **Experimental results on integrating ScMoE architecture into OLMoE model [1].** | | Train Speedup | Inference Speedup | ARC-C | BoolQ | OBQA | RTE | WG | PIQA | AVG | | ----------- | ------------- | ----------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | | OLMoE | 1 | 1 | 24.91% | 58.30% | 30.00% | 44.77% | 49.64% | 51.90% | 43.25% | | OLMoE+ScMoE | 1.13x | 1.19x | 25.51% | 58.10% | 30.80% | 45.49% | 49.25% | 53.48% | 43.77% | Moreover, we offer theoretical analysis upon our empirical findings to support the effectiveness of our methods. Specifically, we observe a correlation between the efficacy of ScMoE and the high similarity of intermediate representations. Given that such high similarity is commonly found in Transformer-based LLMs [3,4,5,6], we believe our ScMoE approach can be effective in larger-scale LLMs. Furthermore, our theoretical analysis of gradient propagation also demonstrates its ability to achieve consistent training while preserving model quality. Additionally, we conduct simulations to validate the effectiveness of speedup in larger-scale scenarios (larger model and larger number of GPUs). Our simulations employ the ASTRA-SIM simulator [7], which models an A800 cluster in alignment with the real-world HPC cluster utilized for evaluations in Section 4. Specifically, the FP16 computational capability is configured to 312 TFLOPS, the intra-node NVLink bandwidth is set at 400 GB/s, and the inter-node network employs a 200Gb/s (25GB/s Bandwidth) InfiniBand connection. We conduct simulations of the forward pass for LLaMA2-MoE, maintaining similar attention configurations to experiments in Table 2. Since the load balance of MoE significantly influences its efficiency and is determined by various inputs, we simulate a fully balanced distribution of input tokens to eliminate any random interference in this aspect. In the simulation, we vary the configurations for the number of GPUs, the number of active and total experts, and expert intermediate size. As illustrated in the following table, while different configurations lead to varying communication and computation durations, our overlapping strategies consistently prove to be effective. **Simulation of larger-scale model training.** |No.|GPU Num|Act./Total Experts|Expert Intermediate Size|Communication Time (us)|Computation Time (us)|Overlap Speedup| |-|-|-|-|-|-|-| |1| 128| top-16 / 128| 1024| 7056| 2169| 1.31x | |2| 64| top-8 / 64| 2048| 6345| 2968| 1.47x| |3| 32| top-4 / 32| 4096| 5994 | 4578| 1.76x| |4| 16| top-2 / 16| 8192 | 5832| 7805| 1.75x| |5| 128| top-16 / 128| 8192| 7056| 7819| 1.90x| |6| 64| top-8 / 64|8192| 6345| 7811| 1.81x | |7| 32| top-4 / 32|8192| 5994| 7807 | 1.77x | |8| 16| top-2 / 16| 8192 | 5832| 7805| 1.75x| |9| 16| top-8 / 128| 8192| 23229| 20725| 1.89x| **Reference** [1] Muennighoff, Niklas, et al. "Olmoe: Open mixture-of-experts language models." *arXiv preprint arXiv:2409.02060* (2024). [2] Lambert, Nathan, et al. "Tulu 3: Pushing frontiers in open language model post-training." arXiv preprint arXiv:2411.15124 (2024). [3] Sun, Qi, et al. "Transformer layers as painters." arXiv preprint arXiv:2407.09298 (2024). [4] He, Shwai, et al. "What matters in transformers? not all attention is needed." arXiv preprint arXiv:2406.15786 (2024). [5] Men, Xin, et al. "Shortgpt: Layers in large language models are more redundant than you expect." *arXiv preprint arXiv:2403.03853* (2024). [6] Gromov, Andrey, et al. "The unreasonable ineffectiveness of the deeper layers." *arXiv preprint arXiv:2403.17887* (2024). [7] Rashidi, Saeed, et al. "Astra-sim: Enabling sw/hw co-design exploration for distributed dl training platforms." ISPASS. IEEE (2020).
null
null
null
null
KGMark: A Diffusion Watermark for Knowledge Graphs
Accept (poster)
Summary: Briefly summarize the paper (including the main findings, main results, main algorithmic/conceptual ideas, etc. that the paper claims to contribute). This summary should not be used to critique the paper. A well-written summary should not be disputed by the authors of the paper or other readers. This paper introduces KGMark, the first watermarking method for knowledge graph embeddings. It leverages a latent diffusion model to embed watermarks in the frequency domain, using a learnable mask to ensure transparency. Experimental results show its high detectability and robustness against various attacks. Claims And Evidence: The claims made in the submission regarding KGMark’s efficacy in watermarking knowledge graphs are supported by clear and convincing evidence: 1. Theoretical Analysis: The paper provides a detailed theoretical framework, including the use of latent diffusion models, Fourier transform-based watermark embedding, and the Learnable Adaptive Watermark Mask Matrix (LAWMM) to ensure transparency and robustness. It also introduces principles like Latent Space Equilibrium and Information-Theoretic Robustness to justify the method’s design. 2. Experimental Results: The claims are empirically validated through extensive experiments on three diverse datasets (Last-FM, MIND, and Alibaba-iFashion). The results demonstrate high watermark detectability (AUC up to 0.99) and robustness against various attacks (e.g., relation alteration, triple deletion, isomorphism variation) while maintaining minimal impact on the knowledge graph’s usability. Comparative Analysis: The paper compares KGMark with its variants (e.g., without LAWMM, only community layer, only vertex layer) and shows that the full method outperforms these variants in terms of robustness and transparency, supporting the claim that the combined approach is superior. Methods And Evaluation Criteria: The proposed methods, evaluation criteria, and benchmark datasets are well-aligned with the problem of watermarking knowledge graphs: 1. Methods: The proposed KGMark framework leverages a latent diffusion model to embed watermarks in the frequency domain, addressing the unique challenges of spatial-temporal variations and structural complexities in dynamic knowledge graphs. The use of a Learnable Adaptive Watermark Mask Matrix and redundant embedding strategies enhances transparency and robustness while ensuring minimal disruption to the knowledge graph’s usability. 2. Evaluation Criteria: The evaluation comprehensively assesses watermark detectability using AUC, transparency through cosine similarity and downstream task performance metrics (e.g., GMR, HMR, AMR, Hits@10), and robustness against various attacks (e.g., relation alteration, triple deletion, isomorphism variation). These metrics effectively capture the critical aspects of watermarking performance in knowledge graphs. 3. Benchmark Datasets: The experiments are conducted on three diverse public datasets—Last-FM, MIND, and Alibaba-iFashion—which represent different real-world applications. These datasets provide a robust basis for evaluating the method’s effectiveness and generalizability across various knowledge graph structures and domains. Theoretical Claims: The paper presents theoretical claims supported by principles such as Latent Space Equilibrium and Information-Theoretic Robustness. These principles aim to ensure watermark transparency and robustness through mathematical formulations. The claims appear logically consistent and align with the problem context. Experimental Designs Or Analyses: The paper presents a well-structured experimental design with: 1. Comprehensive evaluations on three benchmarks (Last-FM, MIND, Alibaba-iFashion) across diverse KG structures. 2. Rigorous testing of watermark detectability, transparency, and robustness against multiple attacks. 3. Comparative analysis of KGMark variants to highlight key components and their combined benefits. Supplementary Material: The supplementary material provides additional details that support the main paper, including Algorithm 1 and Algorithm 2. These algorithms offer clear procedures for graph alignment and redundant watermark embedding, which enhance the method’s robustness against structural variations and attacks. The case study in the supplementary material further validates the transparency of the watermarking process by demonstrating minimal impact on the original embedding distribution. Relation To Broader Scientific Literature: The paper effectively situates its contributions within the broader scientific context by drawing on key concepts from diffusion models, watermarking techniques[1] [2], and knowledge graph[3] management. It leverages the strengths of diffusion models to embed watermarks in a way that balances transparency and robustness[4], building on prior work that has demonstrated the effectiveness of these models in generating and protecting synthetic data[5]. Additionally, the paper integrates ideas from graph neural networks and watermarking to address the unique challenges of structured data, such as graph isomorphism and dynamic updates. 1. Wen, Y., Kirchenbauer, J., Geiping, J., and Goldstein, T. Tree-rings watermarks: Invisible fingerprints for diffu sion images. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems, pp. 58047 58063. Curran Associates, Inc., 2023. 2. Yang, Z., Zeng, K., Chen, K., Fang, H., Zhang, W., and Yu, N. Gaussian shading: Provable performance-lossless image watermarking for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12162–12171, 2024b. 3. Yang, Y., Chen, J., and Xiang, Y. A review on the reliability of knowledge graph: from a knowledge representation learning perspective. World Wide Web, 28(1):4, 2024a. 4. Barman, N. R., Sharma, K., Aziz, A., Bajpai, S., Biswas, S., Sharma, V., Jain, V., Chadha, A., Sheth, A., and Das, A. The brittleness of ai-generated image watermarking techniques: Examining their robustness against visual paraphrasing attacks. arXiv preprint arXiv:2408.10446, 2024. 5. Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., and Foster, I. Comprehensive exploration of synthetic data generation: A survey, 2024. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. Originality and Creativity: Adapting diffusion-based watermarking to structured data like knowledge graphs is novel. Traditional methods for images or text often fail to account for spatial-temporal variations and relational complexities, which KGMark effectively addresses. 2. Clarity and Presentation: The paper presents its methodology clearly, with detailed explanations of theoretical foundations and practical implementations. Diagrams and pseudocode enhance understanding. Weaknesses: 1. Scalability Concerns: The experiments are limited to relatively small datasets, raising questions about the method’s scalability to larger, more complex knowledge graphs like Wikidata. Addressing scalability is crucial for real-world applications, where knowledge graphs often contain millions of entities and relationships. 2. Generalizability: The paper lacks a thorough discussion on how well KGMark would perform on different types of knowledge graphs (e.g., biomedical, social networks) or in diverse application scenarios. Demonstrating generalizability across various downstream domains would strengthen the paper’s impact. 3. Comparative Analysis: While the paper compares KGMark with its variants, it lacks a direct comparison with existing watermarking methods from other domains (e.g., image or text watermarking). Such comparisons would provide a clearer picture of KGMark’s unique advantages and potential limitations. Other Comments Or Suggestions: Overall, the paper is well-written and presents a novel approach to watermarking knowledge graphs. Here are a few suggestions: 1. Clarify Scalability: Include a brief discussion on how KGMark can be scaled to handle larger knowledge graphs. 2. Expand Generalizability: Add a section on potential applications beyond the tested datasets to highlight broader applicability. 3. Proofread: Minor typos and grammatical errors should be corrected to enhance readability. For example, the Method section inconsistently refers to equations as "Equ. (x)" in some places and "Equation (x)" in others, which should be standardized..、 Questions For Authors: 1. Comparative Analysis: How does KGMark compare to existing watermarking methods from other domains (e.g., image or text)? A direct comparison would highlight its unique advantages. 2. Optimal Embedding Strategy: What is the optimal strategy for selecting vertices or communities for watermark embedding, and how does it influence performance under various attacks? 3. Robustness vs. Transparency Trade-off: How does the learning process of the adaptive watermark mask matrix ensure it balances robustness and transparency without overfitting? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer voAk: Thank you for your detailed and constructive feedback. We note that your concerns mainly focus on four aspects: - Scalability and generalizability of KGMark - Comparative analysis - Embedding strategy and robustness–transparency trade-off - Presentation and writing quality ----- ## Dataset Scale and Heterogeneous **Scale and Generalization** KGMark is evaluated on three large-scale, real-world knowledge graphs from diverse domains, ensuring both scalability and generalizability:: - Alibaba-iFashion: 1.21B user clicks from 5.54M users, 4.68M products, and 192K outfit sets. It supports both recommendation and compatibility prediction tasks, and has been deployed in real-world e-commerce systems. - Last-FM: Combines user-tag data from Last. fm with the Million Song Dataset and playlists, covering 5,075 songs with emotion labels and 464K triples. It enables tasks like emotion classification and multimodal analysis. - MIND: Includes ~1M users and 161K news articles with 2.4M click records and 512 relation types. It supports news recommendation, classification, and content-based cold-start modeling. The variety in domains (fashion, music, news) and graph structures allows KGMark to be evaluated under diverse, realistic conditions, ensuring broad generalization ability. **Heterogeneous Graph Structures** All datasets naturally exhibit heterogeneous characteristics. Nodes represent different entity types (e.g., users, products, songs, articles), and edge semantics vary (e.g., click, purchase, like). For example, MIND includes 512 relation types, indicating complex, multi-relational structures. Modeling these interactions requires a heterogeneous graph framework, which KGMark is designed to support. ----- ## Baselines We have added additional experiments on the baselines, as presented in our response to Reviewer *dihZ (Performance vs. Baseline)* and *zE4M (Adversarial Attacks)*. ------ ## Transparency Strategy To further clarify the role of the adaptive mask matrix, we describe how it contributes to both the robustness and transparency of watermark embedding. We aim to balance watermark detectability with minimal impact on downstream tasks by jointly minimizing reconstruction and task-related loss: $$ \mathcal{L} = \sum_{j \in [1, \mathcal{T}]} \left\| \mathcal{Z} _ {\mathcal{T}-k_j}^{\text{INV}} - f _ {\text{DDIM}}^{k_j} \left( f _ w(\mathcal{Z} _ \mathcal{T}^{\text{INV}}, \mathcal{S}, \mathbb{M}), \mathcal{T} \right) \right\|^2 $$ In the refined objective, we introduce two key changes to improve both performance and efficiency: 1. While $f_w$ learns to embed the signature $\mathcal{S}$ via the mask $\mathbb{M}$, the resulting latent representation may deviate from the original, potentially altering structural information. To improve alignment, we introduce a correction term $\alpha \mathcal{S} \cdot \mathbb{M}$, which reduces the gap between the watermarked and original latent representations. This better alignment helps preserve the graph's inherent structure, indirectly supporting downstream task performance. 2. Since gradients on $\mathbb{M}$ accumulate across all diffusion steps, direct optimization becomes computationally intensive. To mitigate this, we adopt a "sample-then-embed" strategy. First sampling the latent representation and then applying watermark embedding which simplifies training and reduces complexity. The resulting loss is: $$ \mathcal{L} = \sum_{j \in [1, \mathcal{T}]} \left\| \mathcal{Z} _ {\mathcal{T}-k_j}^{\text{INV}} - \left[ f _ w\left(f _ {\text{DDIM}}^{k_j}(\mathcal{Z} _ \mathcal{T}^{\text{INV}}, \mathcal{T}), \mathcal{S}, \mathbb{M}\right) + \alpha \mathcal{S} \cdot \mathbb{M} \right] \right\|^2 $$ This formulation enables efficient optimization while enhancing the transparency of the embedded watermark. **Training Process of Adaptive Mask Matrix:** Another goal of the adaptive mask matrix is to improve watermark transparency without sacrificing robustness. As shown in Figure 3(c), the density of the watermark mask matrix is the primary factor affecting transparency. In contrast, its impact on watermark detectability (i.e., robustness) is relatively minor. Based on this observation, LAWMM is designed to learn an adaptive mask matrix that maintains a fixed density while improving transparency through training. To ensure control over the final density, we regulate the number of training epochs. We find that setting the number of epochs to 50 yields an average density of approximately 0.015, which aligns with our desired sparsity level. ------ ## Writing Issues In the revised version, we will carefully review the manuscript to correct any grammatical issues and improve overall clarity and writing style. *Your insights were extremely helpful in improving the presentation of our work. We hope the revisions and clarifications reflect the value of our contribution more clearly.* --- Rebuttal Comment 1.1: Comment: I have read the author's rebuttal and the review of other reviewer. I'd love to increase my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful feedback and the updated score, which reflect your recognition of the improvements and clarifications we have made. If the paper is accepted, we will ensure that all additional experiments and textual revisions introduced during the rebuttal are fully integrated into the final version, along with a thorough proofreading of the entire manuscript.
Summary: This manuscript presents a novel watermarking method for knowledge graph embeddings to ensure the traceability and auditability of knowledge graphs, claiming to embed invisible signatures into diffusion-based latent representations using the Fourier transform. It addresses key challenges by incorporating multi-level redundancy, graph alignment, and a learnable mask to enhance robustness and transparency. Claims And Evidence: This paper emphasizes the detectability, transparency, and robustness of the proposed watermarking method, KGMark. These claims are supported by the following aspects: 1. Theoretical Derivation: The paper rigorously presents the mathematical formulation of KGMark along with detailed derivations, ensuring the theoretical correctness of the proposed method. 2. Experimental Evidence: Comprehensive evaluations are conducted to assess the detectability, transparency, and robustness of KGMark, with results demonstrating the effectiveness of the approach. 3. Case Study: The paper provides a case study for specific downstream tasks, showcasing the practical utility of KGMark in real-world scenarios. Methods And Evaluation Criteria: The proposed methods and evaluation criteria align well with the problem at hand. The study: 1. The AUC is used to evaluate the detectability and robustness of KGMark. 2. The transparency of KGMark is evaluated by the cosine similarity and quality metrics of knowledge graph embeddings. 3. Extensive ablation experiments have been performed on multiple variants of KGMark and their associated hyperparameters. Theoretical Claims: In this paper, the mathematical expression of KGMark and the related theoretical derivation are given, and the attacks on KGmark are modeled. The proofs are logically sound and mathematically rigorous. Experimental Designs Or Analyses: The paper selects three KG datasets from different domains to evaluate KGMark. 1. The detectability of KGMark on clean samples was evaluated. 2. The robustness of KGMark under different kinds and intensities of attacks is evaluated. 3. The transparency of KGMark is evaluated by the similarity and quality of knowledge graph embeddings. Supplementary Material: I have examined the contents of the appendix, such as relevant algorithms, additional theoretical proofs and case studies, and found no major flaws. Relation To Broader Scientific Literature: This paper aims to introduce watermarking technology into knowledge graph, which ensures the traceability and auditability of knowledge graph and protects its intellectual property rights to a certain extent. I think this is a meaningful and useful exploration. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strength: KGMark presents an outstanding exposition of a novel watermarking method for KGEs. Driven by an insightful motivation to protect the copyrights, the proposed KGMark framework introduces key innovations to enhance both transparency and robustness. The Learnable Adaptive Watermark Mask Matrix improves transparency, while multi-level redundancy ensures resilience against structural modifications. Additionally, the incorporation of graph alignment effectively mitigates challenges arising from isomorphism variations. Overall, this approach demonstrates significant potential for safeguarding intellectual property and maintaining data integrity in KGEs. ## Weakness: - The lack of comparison with relevant baselines makes the effectiveness of the proposed method in various aspects unable to be more credible verified. - Although the paper introduces a learnable mask matrix to improve transparency, the relationship between this matrix and the redundant embedding strategy is not explicitly clarified. Further explanation on how these two components interact and contribute to watermark robustness and transparency would enhance the paper. - The case study demonstrates KGMark in a recommendation system. However, it remains unclear whether the authors have tested the method in other domains. Expanding the evaluation to other real-world applications would strengthen the paper’s claims of versatility. Other Comments Or Suggestions: - While KGMark is the first watermarking scheme designed for knowledge graphs, it would benefit from additional comparisons with existing watermark methods to provide a clearer context for its innovations. - The experiments are conducted on relatively small datasets. Evaluating KGMark on larger knowledge graphs would better demonstrate its scalability and robustness in more realistic settings. - The paper does not discuss the computational efficiency of KGMark for embedding watermarks in knowledge graphs of varying sizes. Including an analysis of the method’s performance in terms of computational resources and time complexity across different scales would be valuable. Questions For Authors: - Considering the large number of hyperparameters involved in the paper, can you explain the basis for configuring these parameters? - The paper mentioned the use of community algorithm to segment the graph. Can you explain the necessity of introducing community algorithm? Why don't we directly select several vertices on the graph for redundant embedding? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 1f2k: We sincerely appreciate your thoughtful comments, which primarily concern the following three aspects: - Design choices behind KGMARK - Algorithmic ration for the redundant embedding strategy - Consideration of computational efficiency ----- ## KGMark's Designing KGMARK is specifically designed for knowledge graphs (KGs), which differ significantly from images or text in both structure and semantics. Traditional methods developed for images or text are difficult to apply to KGs due to their unique characteristics: - **Non-Euclidean topology**: KGs lack the spatial continuity of images, making frequency or pixel-based watermarking inapplicable. - **High sensitivity to structural changes**: Small perturbations can cause large semantic shifts. - **Large scale and sparsity**: Real-world KGs require scalable and efficient embedding strategies. Traditional watermarking techniques often assume dense and continuous data representations, making them ill-suited for the sparse and structured nature of KGs. These methods typically lack structural adaptability and fail to account for the non-Euclidean topology, rendering them vulnerable to perturbations such as isomorphic transformations or large-scale deletions. As a result, they often fail to ensure robustness or transparency in such settings. For example, Gaussian-Shading is designed for image generation, where watermarks are embedded by modifying the initial latent noise during sampling. This approach relies on controlling the sampling process from the start. In our setting, however, the graph is already generated, and we recover its latent state via DDIM inversion. Applying Gaussian-Shading post hoc would overwrite the inverted latent representation and break the reconstruction process, making it incompatible with our framework. KGMARK is thus tailored to address these challenges through graph-aware mechanisms. By leveraging graph alignment and community-based redundant embedding, it ensures both robustness and transparency under structural perturbations. We further demonstrate the effectiveness of KGMark across multiple baselines, with detailed results presented in our response to *Reviewer dihZ (Performance vs. Baseline)*. ----- ## Robustness Strategy To ensure robustness against adversarial perturbations, we require that the amount of information retained about the original watermark $\mathcal{S}$ remains above a guaranteed threshold, even after the graph is modified. This is quantified by the mutual information between the original watermark and the extracted result under perturbation: $$ \inf_{\|\Delta A\|_0 \leq \delta} I(\mathcal{S}; T(\mathcal{G}^w + \Delta A)) \geq \beta $$ Here, $\Delta A$ denotes a bounded structural modification to the watermarked graph $\mathcal{G}^w$, and $T(\cdot)$ is the watermark extraction function. The inequality ensures that even under worst-case sparse attacks (e.g., modifying up to $\delta$ edges), the extractor can still recover meaningful information about $\mathcal{S}$. To satisfy this robustness condition, we adopt a redundant embedding strategy that spreads the watermark across both global and local graph structures. Specifically, we partition the graph $\mathcal{G}$ into $l$ communities $\{ \mathcal{C} _ i\} _ {i=1}^l$, with vertices ranked by their centrality $\eta(v)$. The watermark is then embedded as: $$ \mathcal{W}(\mathcal{G}) = \bigcup _ {i=1}^l \Phi(\mathcal{C}_i) \cup \bigcup _ {v \in \mathcal{C}_i} \Psi(v) $$ Here, $\Phi(\mathcal{C}_i)$ encodes $\mathcal{S}$ into the structural signature of each community, while $\Psi(v)$ encodes it around high-centrality vertices. This dual-layer design ensures that the watermark is preserved even when parts of the graph are modified, thereby improving robustness by maximizing the information retained across complementary substructures. To justify our choice of high-centrality vertices for embedding, we highlight three key considerations: - We assume attackers aim to modify the graph without harming its usability, and thus tend to avoid high-centrality nodes. - High-centrality nodes are structurally stable due to their dense connections, making embedded watermarks more resilient to local changes. - Although community detection may vary across runs, unstable nodes are few and weakly connected. They are excluded from embedding, so this does not affect redundancy or robustness. ----- ## Time complexity The computational cost of KGMark is primarily determined by the watermark embedding and optimization processes, with the majority of the time spent on DDIM inference. The runtime is directly correlated with the size of the knowledge graph. $$ AvgTimeCost = \frac{TotalTimeCost}{\text{Number of Nodes}} $$ To provide a more accurate evaluation, we will include the average runtime of KGMARK in the revised version. *We appreciate your detailed comments and hope our responses have clarified the key contributions and addressed your concerns.* --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the substantial efforts made during the rebuttal phase. The additional analysis on computational overhead has improved the overall clarity of the paper, making the presentation of the method and experiments more complete. Based on this, I recommend accepting the paper. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your positive feedback and your recommendation to accept the paper. We are encouraged by your recognition of the contributions made in this work. KGMark is the **first watermarking framework** specifically designed for generated knowledge graphs. Unlike existing watermarking schemes that often overlook the unique structural and semantic properties of KGs, KGMark introduces a diffusion-based embedding mechanism to ensure robustness and transparency under structural perturbations. If the paper is accepted, we will incorporate all experimental improvements and textual refinements introduced during the rebuttal process into the final version, and ensure the manuscript is thoroughly proofread.
Summary: This paper proposes a watermarking method for knowledge graphs (KGs) using diffusion models. The authors claim their method embeds watermarks into KGs via diffusion-based encoding, ensuring traceability, integrity, and copyright protection. The method primarily relies on diffusion encoding, subgraph preservation principles, and loss functions aimed at robustness against certain graph-based attacks. Experimental evaluations were conducted on datasets such as AliF and MIND to validate the effectiveness of watermark embedding, extraction accuracy, and impact on downstream tasks. Claims And Evidence: The authors' key empirical claims lack strong evidence. The claim of robustness against graph-based attacks is weak, as the evaluations do not compare against advanced adversarial attacks. Similarly, the claim of preserving graph integrity and traceability is undermined by significant performance drops in downstream tasks like MIND and Alif. These issues reveal major gaps in empirical support, further compounded by the absence of baseline comparisons on diffusion watermarking [1,2]. [1] Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models [2] Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust Methods And Evaluation Criteria: The proposed methods and evaluation criteria have several fundamental issues. Primarily, the choice to use diffusion models for watermark encoding on graphs is not well-justified, especially given significant drops in Hit@10 performance on downstream tasks (MIND and Alif) observed even with reconstruction alone. Furthermore, essential evaluations against more advanced attack methods prevalent in recent graph attack literature are missing such as [3,4]. [3] Adversarial attacks on knowledge graph embeddings via instance attribution methods [4] Adversarial attacks and defenses on graphs Theoretical Claims: The paper contains inconsistencies and ambiguities in its theoretical formulations, particularly in equations (4) and (9), where the definition of variable S changes without clarification. Additionally, critical conceptual misunderstandings or mislabeling, such as referring to objectives as "principles" add to the confusion. Experimental Designs Or Analyses: The experimental design has several critical flaws. The evaluation fails to compare with state-of-the-art diffusion-based watermarking methods [1,2]. Additionally, the significant performance drop on the AliF and MIND datasets, as shown in previous sections, raises serious concerns about its paradigm suitability. Supplementary Material: I reviewed the supplementary material, which includes some important algorithm details missing from the main paper. The main key parts of watermark extraction and algorithm specifications are only in the appendix, making the main paper less clear and harder to understand on its own. Relation To Broader Scientific Literature: The paper is mainly related to diffusion-based watermarking methods, graph watermarking, graph robustness and graph embedding techniques. Essential References Not Discussed: The paper does not discuss significant recent advances in diffusion-based watermarking methods [1,2], and graph robustness [3,4]. Other Strengths And Weaknesses: While the idea of integrating diffusion models with watermarking in graphs is promising, poor execution and writing severely limit its impact. The paper lacks a clear motivation for using diffusion models for watermarking, and inconsistent, confusing technical explanations further weaken its clarity. Sections 3.3 and 3.5 are particularly unclear, making it difficult to understand what is being optimized, how different loss terms interact, or how watermark extraction works in practice. Additionally, the challenges posed by heterogeneous nodes and relations in knowledge graphs are not addressed. Other Comments Or Suggestions: Statements like "potentially introducing harmful content that compromises analyses or even facilitates the exploitation of real-world systems" and discussions such as "unnerves large corporations, let alone individual researchers" in page 1 are vague or inaccurate. In Figure 2, community detection and alignment correspond to Section 4.4, and Section 4.3 relates to extraction. However, since these are experimental sections, shouldn't they reference Sections 3.5 and 3.3 instead? Questions For Authors: Please refer to the weakness mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer dihZ: We sincerely thank you for your constructive and thoughtful suggestions. We understand that your comments mainly concern the following four aspects: - The robustness of KGMARK under stronger adversarial attacks - Concerns about KGMARK’s performance across datasets - Explanation of the strategies in *Learnable Adaptive Watermark Mask Matrix* and *Defending Isomorphism and Structural Variations* - Writing and presentation issues ------ ## robustness Guided by the reviewer's suggestions, we have added experiments with stronger adversarial attacks([1], [2]) and included corresponding baselines([3], [4], [5], [6]). The results are presented in our response to *Reviewer zE4M (Adversarial Attacks)*. We hope this clarifies the robustness of KGMARK. ------ ## Performance vs. Baseline Guided by the reviewer's suggestion about KGMARK's performance across datasets, we have added comparisons with **four watermarking baselines** (two preprocessing-based, two diffusion-based). | Datasets | Method | CosSim@50 | CosSim@65 | CosSim@75 | GMR ↓ | HMR ↓ | AMR ↓ | Hits@10 ↑ | | ----------- | ----------- | --------- | --------- | --------- | ------ | ----- | -------- | --------- | | **AliF** | Original KG | - | - | - | 1.828 | 1.162 | 135.459 | 0.8980| || DwtDct| 0.7215 | 0.7928 | 0.8251 | 5.096 | 1.699 | 157.036 | 0.6933| || DctQim| 0.7509 | 0.7633 | 0.7653 | 5.104 | 1.654 | 161.142 | 0.7385| || TR | 0.7761 | 0.8431 | 0.9071 | 3.928 | 1.618 | 152.634 | 0.8017| || GS | 0.2879 | 0.3226 | 0.3538 | 6.641 | 1.798 | 172.813 | 0.5137| || **KGMark** | **0.7839** | **0.8309** | **0.9482** | **3.046** | **1.580** | **141.904** | **0.8296** | | **MIND** | Original KG | - | - | - | 7.197 | 1.975 | 155.656 | 0.6649| || DwtDct | 0.7831 | 0.8244 | 0.8312 | 11.328 | 2.328 | 188.992 | 0.5216| || DctQim| 0.7549 | 0.7574 | 0.7703 | 12.037 | 2.753 | 205.483 | 0.4835| || TR| 0.7976 | 0.8108 | 0.8581 | 11.102 | 2.297 | 182.925 | 0.5108| || GS| 0.2843 | 0.3196 | 0.3728 | 14.523 | 3.312 | 234.064 | 0.3940| || **KGMark**| **0.8083** | **0.8533** | **0.9397** | **10.508**| **2.226** | **169.305** | **0.5683**| | **Last-FM** | Original KG | - | - | - | 3.571 | 1.202 | 1711.695 | 0.8436| || DwtDct| 0.7215 | 0.7928 | 0.8433 | 4.519 | 1.502 | 1734.823 | 0.8221| || DctQim| 0.7509 | 0.7633 | 0.7679 | 5.139 | 1.704 | 2043.249 | 0.7264| || TR| 0.7262 | 0.7896 | 0.8364 | 4.733 | 1.652 | 1772.961 | 0.8149| || GS| 0.3252 | 0.3649 | 0.4184 | 6.349 | 2.016 | 2192.492 | 0.6463| || **KGMark** | **0.8876** | **0.9051** | **0.9161** | **4.455** | **1.452** | **1716.365** | **0.8430** | We hope the results of these additional experiments demonstrate that KGMark consistently outperforms TreeRing (TR)[3], GaussianShading (GS)[4], both of which replace 5% of nodes with watermark data across multiple downstream tasks. DwtDct [5] and DctQim [6] are both frequency-domain post-processing watermarking techniques that embed watermark signals into transformed frequency coefficients, aiming to balance robustness and imperceptibility. [3] Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models [4] Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust [5] Digital Watermarking and Steganography [6] A class of provably good methods for digital watermarking and information embedding ------ ## Explanation of Strategies A detailed explanation of the rationale behind KGMARK’s design is presented in our response to *Reviewer 1f2k (KGMARK’s Designing)*. We have also elaborated on the core strategies and their underlying motivations in our responses to *Reviewer 1f2k (Robustness Strategy)* and *Reviewer voAk (Transparency Strategy)*. In the revised version, we will revise the relevant sections(3.3 and 3.5) in the our paper to provide a clearer and more comprehensive explanation of both the learnable mask mechanism and the robustness design for handling isomorphism and structural variations. ----- ## Writing Issues We sincerely appreciate your careful feedback and will make the following revisions in the final version: 1. Correcting the section reference in Figure 2. 2. Clarifying the duplicated variable usage. 3. Relocating the algorithm for better coherence. 4. Refining Sections 3.3 and 3.5 to improve clarity and readability. We will thoroughly proofread the paper and clarified the sentences. *We thank the reviewer's helpful suggestions. The negatives pointed out by the reviewer are very insightful and have inspired us to reflect more deeply on our paper. We believe that the revisions have strengthened our paper.*
Summary: The paper presents KGMark, a watermarking framework designed for Knowledge Graphs (KGs), which are widely used in applications like semantic search, question answering, and recommendation systems. The primary goal of KGMark is to embed robust, detectable, and transparent watermarks into dynamic KGs to protect intellectual property and ensure data integrity, especially in the context of AI-generated content. Claims And Evidence: The claims made in the submission are well-supported by experimental results and case studies. However, to further strengthen the claims, the authors could consider additional experiments on testing against more extreme or adaptive attacks such as attacks specifically designed to remove watermarks. Methods And Evaluation Criteria: Standard metrics such as FPR (False Positive Rate) and TPR (True Positive Rate) for detectability and cosine similarity for transparency have been used. Theoretical Claims: The theoretical claims have been checked and to the best of my knowledge seem valid. Experimental Designs Or Analyses: The authors use three datasets (Last-FM, MIND, and Alibaba-iFashion) representing diverse real-world scenarios. These datasets are appropriate for evaluating the generalizability and effectiveness of KGMark across different domains. Supplementary Material: Yes, all supplementary material have been reviewed. Relation To Broader Scientific Literature: This work is quite innovative and to the best of my knowledge, there are no other watermarking techniques for knowledge graphs. Essential References Not Discussed: To the best of my knowledge, there are no essential references that are not discussed. Other Strengths And Weaknesses: Strengths: - The document appears to be well-structured and comprehensive. - This is an innovative work related to the watermarking of KGs Weaknesses: - Additional experiments on testing against more extreme or adaptive attacks such as attacks specifically designed to remove watermarks. Other Comments Or Suggestions: No other comments. Questions For Authors: 1. What are some limitations of this work? 2. Are there any privacy concerns that need to be discussed when the KG contains sensitive data? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer zE4M: We appreciate your insightful comments, which mainly focus on: - Empirical validation under stronger adversarial attacks - Clarification of the limitations of KGMARK - Discussion of KGMARK’s role in privacy protection for sensitive data ----- ## Adversarial Attacks We have incorporated two recent and stronger adversarial attacks NEA[1] and L2 Metric [2] into our evaluation. In the following table, we retain the original **five high-intensity attacks**, introduce **two additional adversarial attack** types, and evaluate KGMark against **four newly added baseline** methods. The results show that KGMARK consistently outperforms four baseline methods acrosss **three dataset**, demonstrating superior robustness. | Datasets | Method | Clean | Relation Alteration (50%) | Triple Deletion (50%) | Gaussian Noise (50%) | Smoothing (50%) | L2 Metric | NEA | IsoVar | | -------- | ------ | ------ | ------------------------- | --------------------- | -------------------- | --------------- | --------- | ------ | ------ | | AliF | DwtDct | 0.9837 | 0.8371 | 0.7724 | 0.8626 | 0.8053 | 0.9577 | 0.9638 | 0.6039 | | | DctQim | 0.9749 | 0.8139 | 0.7073 | 0.6949 | 0.7665 | 0.9203 | 0.9278 | 0.5867 | | | TR | 0.9814 | 0.7392 | 0.8091 | 0.8063 | 0.7823 | 0.9621 | 0.9584 | 0.6257 | | | GS | 0.9882 | 0.7998 | 0.7850 | 0.8921 | 0.7906 | 0.9364 | 0.9512 | 0.6094 | | | **KGMark** | **0.9991** | **0.9207** | **0.9320** |**0.9136** | **0.8887** | **0.9841** | **0.9809** | **0.9933** | | MIND | DwtDct | 0.9793 | 0.8161 | 0.7610 | 0.8285 | 0.8121 | 0.9358 | 0.9291 | 0.6348 | | | DctQim | 0.9785 | 0.8269 | 0.6993 | 0.7186 | 0.7935 | 0.9209 | 0.9198 | 0.5708 | | | TR | 0.9862 | 0.8171 | 0.7831 | 0.7721 | 0.8296 | 0.9682 | 0.9543 | 0.5763 | | | GS | 0.9903 | 0.7930 | 0.8284 | 0.8536 | 0.8252 | 0.9767 | 0.9681 | 0.5845 | | | **KGMark** | **0.9987** | **0.9314** | **0.9576** | **0.9232** | **0.9012** | **0.9849** | **0.9883** | **0.9842** | | Last-FM | DwtDct | 0.9801 | 0.8229 | 0.7415 | 0.8514 | 0.7740 | 0.9596 | 0.9678 | 0.6407 | | | DctQim | 0.9842 | 0.8062 | 0.7125 | 0.7383 | 0.8174 | 0.9144 | 0.9161 | 0.5938 | | | TR | 0.9879 | 0.7982 | 0.8519 | 0.8632 | 0.8531 | 0.9553 | 0.9487 | 0.6109 | | | GS | 0.9795 | 0.8303 | 0.8667 | 0.8912 | 0.8575 | 0.9638 | 0.9594 | 0.6551 | | | **KGMark** | **0.9976** | **0.9421** | **0.9031** | **0.9295** | **0.9131** | **0.9886** | **0.9814** | **0.9977** | We also highlight that the robustness enhancement in KGMARK allows it to retain watermark fidelity even under high-intensity perturbations (e.g., random deletion of 50% of entities), where baseline methods fail significantly. Another interesting observation is that while KGMark shows better performance than the baselines under low-intensity attacks (to be detailed in the extended version), its robustness remains relatively stable as the attack strength increases. In contrast, most baselines exhibit noticeable performance degradation under higher levels of perturbation. [1] Node embedding attacks via graph poisoning (NEA) [2] knowledge graph embedding attacks via instance attribution methods ------ ## Limitations While the embedding dimensionality may influence the balance between watermark detectability and downstream performance, this can be mitigated through adaptive tuning in future work. Additionally, extending our DDIM-based framework to support newer sampling strategies is a promising direction we plan to explore. ------ ## Privacy Protection When (KGs) involve sensitive information, they face several privacy risks: 1. Tampering risk: Attackers may alter the generated graph to inject harmful content, potentially leaking sensitive information. 2. Disturbing sensitive subgraphs: Even with structure-aware embedding, watermarking may affect subgraphs involving sensitive entities. 3. Key misuse: If the watermark key is exposed, it could be exploited for data tracing or structural inference. KGMARK incorporates embedding-level mechanisms to mitigate the above risks and enhance privacy protection. Moreover, KGMark is model-agnostic and can be easily integrated into existing knowledge graph embedding (KGE) frameworks. This compatibility enables combination with complementary privacy-preserving techniques (such as differential privacy) to achieve multi-layered protection. *We sincerely appreciate your valuable feedback, which has helped us improve the quality and clarity of our work. We hope our responses have addressed your concerns and clarified the contributions of our paper.*
null
null
null
null
null
null
The Noisy Laplacian: a Threshold Phenomenon for Non-Linear Dimension Reduction
Accept (poster)
Summary: The authors focus on manifold learning and the effect of noise on the Laplacian operator in that context. The paper is mainly a theretical study with some experiments to back up the claims. The scope of the experiments is rather limited as they must fulfill strong assumptions (manifold data, uniform noise). Claims And Evidence: Manifold recovery in spite of noise. First use of the Sasaki metric. The main claims are reaonable and apparently proved in a sound way. Methods And Evaluation Criteria: Looks sound (mainly theoretical proofs). Theoretical Claims: No. The claims are mainly theoretical/mathematical and they largely exceed my paygrade. As said above, the claims sound reasonable and there is no real surprise in the results (rcovery wrt noise level). Experimental Designs Or Analyses: Well illustrated experiments of manifold learning back up the claims. Mainly synthetic data owing to the strong assumptions of the theory (manifold data only, constant/uniform noise level). Some (very specific) real data is used in additional experiments. Supplementary Material: Skimmed through it. Quality seems to be on par with the main paper, namely, very good. Mainly proofs and aadditional experiments (nicely illustrated). Relation To Broader Scientific Literature: Good. Essential References Not Discussed: Good. Other Strengths And Weaknesses: The paper is well written and relatively easy to read for a theoretical one. As most theoretical papers, the contributions are mainly proofs that have little impact on the practice. The domain of interest (manifold learning with spectral embedding) is a niche (compared to DR with NE, e.g.). These minor shortcomings do not decrease the value of the contribution for a slected audience, most probably. Other Comments Or Suggestions: / Questions For Authors: Can you refine your assesment LTSA? Same behavior but really no link? LLE not using explicitly a Laplacian but does it implicitly, as shown later. Possible to come to a similar result? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive reviews! Geometric Data Analysis (GDA) is a small area, and your attention to it is appreciated. Here we respond to the main points raised by all reviewers. Is the result suprising and new in its particular area? Our sharp threshold result is actually surprising, because in the noiseless case, when samples are _on_ a manifold, the degradation in the eigenvectors is gradual, with no threshold. We show that if noise is added to the samples, the degradation is catastrophic at a predictable threshold. This was not expected from existing theory. Moreover, this threshold depends ONLY ON THE NOISE, not on other (unknown) properties of the manifold. This is also surprising, since so many other DiffMap properties depend on injectivity radius, reach, volume, etc. Third, it also suggests that *anisotropic noise* may paradoxically make the estimation of the eigenfunctions easier, due to widened spectral gaps. Finally, it adds to other indirect evidence that estimating a manifold by the eigenfunctions of the Laplacian, even though beautiful theoretically, may not be robust in practice. The authors plan to focus on LTSA in future work. [Note that embeddingless methods to estimate a manifold exist, their behavior is well studied, and _different_ from what we discover in this submission.] Practical implications: 1) Our result is an __impossibility results__. We show that estimating the Laplacian e-vectors has informational limitations, __even under strong assumptions__. An impossibility result directly impacts further analysis under weaker assumptions. 2) If noise can be estimated, then from our analysis, the threshold will be known, and we will know which e-vectors belong to the manifold. 3) The experiments on VAE suggest that the threshold phenomenon we discovered is more general, and this begs to be known. 4) UMAP the most popular of Neighbor Embedding methods uses the eigenvectors of the Laplacian to seed their embedding. Thus our result may be relevant to the UMAP users. More precisely, depending on how one uses UMAP, and how one avoids other unrelated artefacts of this heuristic, the threshold may become relevant or not. We will include comments 1) and 2) in the final version of the paper to clarify the implications of our study. We are aware this work is of a somewhat "niche field", and we thank the reviewer who expressed this fact. Indeed, Neighbor Embeddings are vastly more common. This reality will color the decision on this paper, whether it's explicitly stated or not. We understand that ICML may make the strategic decision to favor papers based on their area. When such a decision is made, it is not just affecting the authors, it is also implicitly telling the people who might read this paper that ICML the flaghips ML conference has no place for GDA (for example). More to the point, the theoretical properties we uncover are broadly relevant. As we have already discussed, our results have immediate importance for popular algorithms such as UMAP, and empirically we see similar threshold phenomena may affect other non-Laplacian embeddings. Thus our work is not only a result regarding one specific spectral analysis of a dataset, but a new informative perspective connecting denoising and dimension reduction. We now focus on reviewer specific points. We believe our work has many practical implications, and we will dwell on its implications for DM embeddings in particular. The sharp threshold phenomenon indicated by our results gives clear guidance to practitioners about the limitations of Laplacian embeddings. In practice, one may argue that this is already well understood, as low dimensional embeddings are typically preferred. The benefit of our results is that they provide new rationale for why this is essential, and moreso, how one may select a cut-off. In future research, techniques to analyze the structure of the noise may be developed to estimate exact cut-offs. Further, our theory suggests that ``uninformative" eigenfunctions catastrophically deviate from the underlying manifold structure, thus standard techniques to promote parsimony are effective at removing this uninformative information. Still dwelling on this point, not all papers that consider Laplacian embeddings advocate for low-dimensional bases. For example, in [Green, Tibshirani, 2021], infinite dimensional bases are employed for non-parametric regression. Our work provides an interesting complementary perspective on this. For signals that have little dependence on the noise, our results indicate that there is very little benefit to exceeding the "noise threshold". This opens up many questions about how this can be leveraged to improve learning rates. It is an excellent point raised about more refined results being possible for LTSA, as a connection to the Laplacian exists as seen in works such as [Ting, Jordan, 2018]. We plan to pursue this in future research.
Summary: This work brings of the interesting Sasaki metric into manifold learning and Laplacian Enginmap, and prosed theoretical analysis results to support low frequency eigen recovery under noise constrains defined in abstract. Strengths: the high-level idea of introducing Sasaki metric, with both "tangent" and "normal" concept, or as "horizontally" and "vertically", and with informative recovery threshold, which is linked with noise amplitude. Experimental results on both synthetic data and real data, are presented to support Theorem 3.1, and include results from LTSA (ref. 3) and VAE (ref. 2), which are not Laplacian decomposition-based methods. References: 1. R. R. Coifman and S. Lafon, “Diffusion maps”, Applied and Computational Harmonic Analysis, 2006 2. D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” 2022. 3. Z. Zhang and H. Zha, “Principal manifolds and nonlinear dimensionality reduction via tangent space alignment,” SIAM, 2004. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I looked Theorem 3.1, Theorem 3.3, and appendix section B and C, though not very carefully for examination. Experimental Designs Or Analyses: Laplacian and non-Laplacian based methods on both synthetic and real data are included in section 4 & 5 & appendix, all look reasonable. Supplementary Material: Yes, I read through supplementary section A and B for Sasaki metric and Laplacian, and section E and F for more experiments details. Relation To Broader Scientific Literature: NA Essential References Not Discussed: The following paper is one of the foundation works in the area of manifold learning, in particular for Laplacian methods, feel should be cited. M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, June 2003 Other Strengths And Weaknesses: As seems related, manifold intrinsic dimensionality estimation, not explicit discussed in this work right, and feel will be interesting to see discussions in this line. For real data with noise, often even the rough intrinsic dimensionality is challenging, and consider low frequency eigen recovery with theoretical analysis support, then guess perhaps under certain conditions dimensionality estimation can be more reliable. Other Comments Or Suggestions: Questions: not sure if I missed this, the noise scale "r" is defined in the abstract and mentioned section 2 too, though, it seems not see the clear definition or guess mostly as section 2.1 (the part refer to ref. 6, 19) as is. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive reviews! Geometric Data Analysis (GDA) is a small area, and your attention to it is appreciated. Here we respond to the main points raised by all reviewers. Is the result suprising and new in its particular area? Our sharp threshold result is actually surprising, because in the noiseless case, when samples are _on_ a manifold, the degradation in the eigenvectors is gradual, with no threshold. We show that if noise is added to the samples, the degradation is catastrophic at a predictable threshold. This was not expected from existing theory. Moreover, this threshold depends ONLY ON THE NOISE, not on other (unknown) properties of the manifold. This is also surprising, since so many other DiffMap properties depend on injectivity radius, reach, volume, etc. Third, it also suggests that *anisotropic noise* may paradoxically make the estimation of the eigenfunctions easier, due to widened spectral gaps. Finally, it adds to other indirect evidence that estimating a manifold by the eigenfunctions of the Laplacian, even though beautiful theoretically, may not be robust in practice. The authors plan to focus on LTSA in future work. [Note that embeddingless methods to estimate a manifold exist, their behavior is well studied, and _different_ from what we discover in this submission.] Practical implications: 1) Our result is an __impossibility results__. We show that estimating the Laplacian e-vectors has informational limitations, __even under strong assumptions__. An impossibility result directly impacts further analysis under weaker assumptions. 2) If noise can be estimated, then from our analysis, the threshold will be known, and we will know which e-vectors belong to the manifold. 3) The experiments on VAE suggest that the threshold phenomenon we discovered is more general, and this begs to be known. 4) UMAP the most popular of Neighbor Embedding methods uses the eigenvectors of the Laplacian to seed their embedding. Thus our result may be relevant to the UMAP users. More precisely, depending on how one uses UMAP, and how one avoids other unrelated artefacts of this heuristic, the threshold may become relevant or not. We will include comments 1) and 2) in the final version of the paper to clarify the implications of our study. We are aware this work is of a somewhat "niche field", and we thank the reviewer who expressed this fact. Indeed, Neighbor Embeddings are vastly more common. This reality will color the decision on this paper, whether it's explicitly stated or not. We understand that ICML may make the strategic decision to favor papers based on their area. When such a decision is made, it is not just affecting the authors, it is also implicitly telling the people who might read this paper that ICML the flaghips ML conference has no place for GDA (for example). More to the point, the theoretical properties we uncover are broadly relevant. As we have already discussed, our results have immediate importance for popular algorithms such as UMAP, and empirically we see similar threshold phenomena may affect other non-Laplacian embeddings. Thus our work is not only a result regarding one specific spectral analysis of a dataset, but a new informative perspective connecting denoising and dimension reduction. We now focus on reviewer specific points. On dimension estimation. Thanks for bringing up this point. [Little, Maggioni, IEEE 2009] present a dimension estimation in noise algorithm. They show that when performing local PCA at a well chosen radius the singular values have the largest eigengap at $d$ equal to intrinsic dimension. In other words, the noise and geometric eigenvalues separate. Rather than directly estimating the dimension, it is also worth considering another perspective suggested by our paper. Dimension can be somewhat misleading. As seen in our analysis, high-dimensional manifolds can act as if they are low-dimensional if analyzed at an appropriate scale. Thus quantities such as covering numbers, or as popularly used in Laplacian estimation, median distance between datapoints, become more relevant in assessing the complexity of the data. If one was to assume that their samples were indeed generated from a tubular neighborhood, a simple estimator for intrinsic dimension could be the rate at which a random subsample covers the whole dataset when a relatively small number of subsamples is selected. As a dual perspective, for a generic dataset where one may not even adopt this strong tubular assumption, one may look at these same covering numbers and use them to assess an "intrinsic dimension" of the data. On the citation of Belkin and Niyogi, we agree that this is more than appropriate to reference in our paper, and we will gladly include it. Regarding $r$, as mentioned by the reviewer, this is defined in section 2, however we can more clearly indicate that it is the radius of the noise where appropriate.
Summary: the authors provide a theoretical analysis that shows that Laplacian eigenfunctions capture the geometry of the underlying manifold, without needing the noise amplitude or dimension to vary with the same size. The main technique leverages the so called Sasaki metric in Riemannian geometry. They conduct experiments and also observe similar behavior in other non Laplacian based dimension reduction methods. Claims And Evidence: yes, the claims are generally supported. The mathematics is, to the best of my ability, sound. The experiments, while not exhaustive, do support the claims of the authors. Methods And Evaluation Criteria: yes. The experiments demonstrated the cutoff phenomenon as predicted by the theory. Theoretical Claims: I only checked the main statements of the theorems in the main text. Those theorems appear sound to me. Experimental Designs Or Analyses: Yes, I checked the soundess of the experimental design. They are fine. Supplementary Material: No. Relation To Broader Scientific Literature: This is a theory paper that provides advancement on the theoretical analysis of popular algorithms that are broadly used in scientific applications. Essential References Not Discussed: How does the current work related to prior work on L-infinity convergence? SPECTRAL CONVERGENCE OF GRAPH LAPLACIAN AND HEAT KERNEL RECONSTRUCTION IN L∞ FROM RANDOM SAMPLES by dunson, wu and wu Other Strengths And Weaknesses: The main weakness of this paper is motivation and readability. Since most ML readers are likely not familiar with the Sasaki metric, I suggest the authors spend more time and text to introduce the metric and how it differs from traditional ones and provide more intuition. Other Comments Or Suggestions: / Questions For Authors: / Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive reviews! Geometric Data Analysis (GDA) is a small area, and your attention to it is appreciated. Here we respond to the main points raised by all reviewers. Is the result suprising and new in its particular area? Our sharp threshold result is actually surprising, because in the noiseless case, when samples are _on_ a manifold, the degradation in the eigenvectors is gradual, with no threshold. We show that if noise is added to the samples, the degradation is catastrophic at a predictable threshold. This was not expected from existing theory. Moreover, this threshold depends ONLY ON THE NOISE, not on other (unknown) properties of the manifold. This is also surprising, since so many other DiffMap properties depend on injectivity radius, reach, volume, etc. Third, it also suggests that *anisotropic noise* may paradoxically make the estimation of the eigenfunctions easier, due to widened spectral gaps. Finally, it adds to other indirect evidence that estimating a manifold by the eigenfunctions of the Laplacian, even though beautiful theoretically, may not be robust in practice. The authors plan to focus on LTSA in future work. [Note that embeddingless methods to estimate a manifold exist, their behavior is well studied, and _different_ from what we discover in this submission.] Practical implications: 1) Our result is an __impossibility results__. We show that estimating the Laplacian e-vectors has informational limitations, __even under strong assumptions__. An impossibility result directly impacts further analysis under weaker assumptions. 2) If noise can be estimated, then from our analysis, the threshold will be known, and we will know which e-vectors belong to the manifold. 3) The experiments on VAE suggest that the threshold phenomenon we discovered is more general, and this begs to be known. 4) UMAP the most popular of Neighbor Embedding methods uses the eigenvectors of the Laplacian to seed their embedding. Thus our result may be relevant to the UMAP users. More precisely, depending on how one uses UMAP, and how one avoids other unrelated artefacts of this heuristic, the threshold may become relevant or not. We will include comments 1) and 2) in the final version of the paper to clarify the implications of our study. We are aware this work is of a somewhat "niche field", and we thank the reviewer who expressed this fact. Indeed, Neighbor Embeddings are vastly more common. This reality will color the decision on this paper, whether it's explicitly stated or not. We understand that ICML may make the strategic decision to favor papers based on their area. When such a decision is made, it is not just affecting the authors, it is also implicitly telling the people who might read this paper that ICML the flaghips ML conference has no place for GDA (for example). More to the point, the theoretical properties we uncover are broadly relevant. As we have already discussed, our results have immediate importance for popular algorithms such as UMAP, and empirically we see similar threshold phenomena may affect other non-Laplacian embeddings. Thus our work is not only a result regarding one specific spectral analysis of a dataset, but a new informative perspective connecting denoising and dimension reduction. We now focus on reviewer specific points. The convergence result by Dunson, Wu, Wu has several overlaps with our work. That paper establishes guarantees for Laplacian estimation through empirical samples. As we establish properties of the continuum limit of these estimators, these results can be combined to assess the quality by which noisy samples from a manifold can be used to recontstruct intrinsic Laplacian data, although not without caveats: 1. Our main results establish $L^2(\mu)$ approximation guarantees, differing from the $L^\infty$ result of DWW. One potential path to allign these is to extend our analysis to an appropriate RKHS topology, namely a high order sobolev space, from which a convergence result would then imply the desired sup-norm convergence. 2. The results of DWW are not technically applicable to our setting. They assume data to be drawn from a closed manifold, a condition tube manifolds violate. Showing that their result hold in our setting is non-trivial, although it can most likely be addressed using standard techniques. 3. The rate of convergence would also be of key interest. We expect this to be a multi-scale phenomenon similar to the spectral growth we analyze in our paper. We expect an initial estimation rate on the order of the intrinsic dimension of the data, as has been shown in other results which relate Laplacian approximation to covering and packing numbers. Lastly, we thank the reviewer for their constructive feedback on the readability of the paper. We can adjust our presentation to emphasize the relationship between Sasaki and induced metrics in the main text rather than the appendix.
null
null
null
null
null
null
null
null
Best of Both Worlds: Advantages of Hybrid Graph Sequence Models
Accept (poster)
Summary: This paper introduces GSM++, a hybrid graph sequence model that combines Mamba (RNN) and Transformer architectures for graph learning. It leverages Hierarchical Affinity Clustering (HAC) for efficient graph tokenization and Hierarchical Positional Encoding (HPE) to enhance structural representations. Experimental results show GSM++ outperforms existing models on local and global graph tasks. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, it makes sense to use Accuracy to evaluate the classification performance. Theoretical Claims: Yes, the theoretical claims about the effectiveness are correct. Experimental Designs Or Analyses: The experiments and the result analysis are sound and extensive. Supplementary Material: Yes, I reviewed the additional results and the related work investigation in the supplementary material. Relation To Broader Scientific Literature: This work proposes a novel tokenization based on the Hierarchical Affinity Clustering(HAC) tree. This idea is quite novel. Essential References Not Discussed: In the text, the authors mentioned the over-smoothing issue in GNNs, and one recent study [1] on this issue is missing in the discussion. [1] "SkipNode: On alleviating performance degradation for deep graph convolutional networks." IEEE Transactions on Knowledge and Data Engineering (2024). Other Strengths And Weaknesses: **One More Weakness**: It would be better to provide the source code to improve the reproducibility of the experimental findings in this work. Other Comments Or Suggestions: None Questions For Authors: Please refer to the issues mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you so much for your time and constructive review. We are also glad that the reviewer has found our work novel and effective. > *Missing study* Thank you for bringing this relevant paper to our attention. We will make sure to properly discuss this paper in the final version of our submission. > *Reproducibility and Code* Following your suggestion, we will provide the details of hyperparameters and will open source the code upon acceptance of the paper to further enhance the reproducibility of the experimental findings. ---- ---- ---- In the next part, following the guideline, we reply to `Reviewer wtrm`'s comment. > *Broad applicability* We want to kindly bring to your consideration that we have performed experimental results on 18 different datasets ranging 5 different benchmarks and diverse tasks ranging from node classification, graph classification/regression, shortest path, cycle check, triangle counting, etc. We kindly bring to your consideration that performing experiments in such diverse tasks has not often happened in recent studies. For example, the papers mentioned by the reviewer (both published in ICLR 2025), only consider a subset of our baselines, tasks, and datasets. > *For datasets from LRGB* Following your suggestion, we follow this study and tune hyperparameters for baselines and our model and will report the results in the final version of the paper. However, please note that: (1) Even using the reported results in this paper, our model outperforms all the baselines with a significant margin. Therefore, the main message of the paper will remain the same and won’t be changed. (2) using hyperparameter tuning will also improve the performance of our models, leading to even a bigger gap with baselines. > *Complexity of the model* Please note that as mentioned above, HAC is a very scalable algorithm and one-time procedure. Our overall time complexity remains the same as other graph sequence models in theory, but in practice, GSM++ uses less memory and is faster. Please see our response to `Reviewer pYLA.` > *The use of the hybrid model* Please note that we have provided theoretical justification that why SSMs suffer from representation collapse and why the use of full transformers can mitigate this. Therefore, we believe we fully support our claims and choices. We are not aware of an existing study that combines virtual nodes with SSMs to prevent this issue. If there is, we would be happy to discuss and compare. Please note that there are definitely more interesting ideas that might or might not work, which could be an interesting future work, but out of the scope of this paper. By local structural bias, we mean understanding the structure around each node.
Summary: The paper introduces a general Graph Sequence Model (GSM) framework aimed at systematically studying graph-based learning methods utilizing sequence models. It identifies core limitations in existing approaches, notably their inability to simultaneously capture local structures and long-range dependencies efficiently. To address these issues, the authors propose GSM++, a hybrid sequence model combining Transformers and State-Space Models (SSMs). GSM++ incorporates a novel tokenization strategy based on Hierarchical Affinity Clustering (HAC), using hierarchical clustering to produce meaningful node orderings. Additionally, a "Mixture of Tokenization" (MoT) is proposed to select the most suitable tokenization per node. The authors claim superior empirical performance and present theoretical arguments regarding the advantages of hybrid architectures. Claims And Evidence: Several claims, particularly those asserting the general superiority of GSM++ across a broad range of tasks, appear overstated. While GSM++ shows good empirical results on certain benchmarks, these results do not provide sufficient evidence of broad applicability, and crucial comparative evaluations are lacking. For instance, 1) the performance gain attributed to hybridization is not uniformly substantial, and it is not clear whether the gain comes from using additional parameters. 2) The number of parameters for GSM++ is not given, it is not clear whether the improved performance is brought by using larger models. 3) Some recent but highly relevant baselines are not compared to, such as NeuralWalker [Chen'25] and GRASS [Liao'25]. 4) For datasets from LRGB, the authors don't seem to follow the suggestions provided by [Tonshoff'23], leading to significantly worse performance than state-of-the-art methods. Methods And Evaluation Criteria: The proposed hierarchical tokenization (HAC) method, while theoretically appealing, lacks rigorous experimental justification for its necessity over simpler alternatives, such as simple DFS or random walks. The evaluation primarily relies on synthetic and standard datasets, and there is insufficient discussion of real-world applicability or scalability. For instance, random walk-based tokenization used in NeuralWalker [Chen'25] seems to work better in most benchmark tasks. The use of the hybrid model combining SSMs and transformers is motivated by the representational collapse of SSMs. However, this argument does not justify the choice of using a transformer. In fact, some cheaper layers seem to work as well as a transformer, e.g., a virtual node layer [Rosenbluth'24]. Theoretical Claims: The theoretical results presented, although mathematically sound, often hinge on assumptions or idealized settings that may not translate effectively into practical scenarios. The provided theoretical insights, such as representational collapse and sensitivity analysis, are insightful but remain somewhat detached from practical implications. Experimental Designs Or Analyses: The experimental section has significant shortcomings. In addition to the issues that I mentioned in "Claims and evidence", there are some additional weaknesses that I will list here. 1) The paper lacks thorough ablation studies that isolate the benefits of individual components comprehensively. 2) The performance gains shown in many benchmarks are marginal, calling into question the practical significance of the proposed approach. 3) There is also no explicit discussion and evaluation about the scalability of GSM++. It is not clear whether GSM++ is scalable to very large graphs with millions of nodes. Supplementary Material: The supplemental material provides additional theoretical proofs and some experimental details. However, no details about the baseline methods' and the proposed model's configurations and hyperparameters are given, raising potential reproducibility and fair comparison issues. Relation To Broader Scientific Literature: While the paper acknowledges relevant Transformer and recurrent neural network literature, it insufficiently relates its contributions to recent graph Transformer variations, random walk-based models, or hierarchical methods beyond HAC. For instance, random walk-based models are less discussed compared to other subgraph based models. There is a growing line of research using sequence models to encode random walks represented as sequences, such as CraWL and [Chen'25]. However, the presented paper doesn't properly relate its contributions to this line of research. Essential References Not Discussed: Critical works on random walk-based models using sequence models are missing, such as NeuralWalker [Chen'25]. Works on hierarchical graph pooling methods, including DiffPool [Ying'18] and other pooling methods, are notably absent. Some recent baseline methods are not discussed and compared to, such as NeuralWalker [Chen'25] and GRASS [Liao'25]. [Chen'25]: Learning Long Range Dependencies on Graphs via Random Walks, ICLR 2025. [Liao'25]: Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention, ICLR 2025. [Ying'18]: Hierarchical Graph Representation Learning with Differentiable Pooling, NeurIPS 2018. [Tonshoff'23]: Where Did the Gap Go? Reassessing the Long-Range Graph Benchmark, TMLR 2024. [Rosenbluth'24]: Distinguished in uniform: Self-attention vs. virtual nodes, ICLR 2024. Other Strengths And Weaknesses: ### Strengths: - Provides an interesting theoretical angle on hybrid model performance. - Introduces a potentially useful framework (GSM). ### Weaknesses: - Limited novelty due to incremental combinations of existing techniques. - Insufficient empirical validation. - Potential reproducibility issues due to missing details about model hyperparameters. - Lack of direct comparison with simpler, computationally cheaper methods. Lack of comparison with some recent but highly relevant baselines. - Potential scalability issues. Other Comments Or Suggestions: - A clearer discussion of computational complexity, particularly regarding scalability and practical runtime costs of HAC, is necessary. - L22: could the authors elaborate on the meaning of "local structural bias"? I don't think it's commonly known concept. - Figure 1 and 2 should be included in the main paper. Typos (not extensive as there are a lot of small typos): - L14: **the** tendency - L45: promising potential -> potential - L96: types **of** sequence models - L103: this advantages -> this advantage - L105: so the -> so that the - L97: Let $G=(V, E)$, be a graph -> Let $G=(V, E)$ be a graph - L282: can results -> can result - L410: GSM++ with **the** state-of-the-art methods Questions For Authors: 1. How does the HAC-based tokenization perform in terms of scalability and runtime efficiency on large-scale real-world graph datasets? And how about the full model? 2. Can the authors empirically demonstrate the necessity and benefits of their hybrid model over purely Transformer-based or purely recurrent models more rigorously? How does it compare to other combinations such as SSM + Virtual node? 3. How does GSM++ compare to recent highly relevant baseline methods, such as NeuralWalker [Chen'25] and GRASS [Liao'25]? 4. Can the authors reevaluate GSM++ on LRGB using the better practice from [Tonshoff'23]? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you so much for your time and constructive review. > *Broad applicability* Please see the response in our message to `Reviewer psEP`. > *The performance gain attributed to hybridization* Please note that we follow the original benchmarks. We use 100K for MNIST and CIFAR, while using 500K parameters for all other cases. Therefore, the performance gains are not a result of more parameters. Also, our ablation study in Table 4 shows the importance of hybrid models +1.3% performance gain, which is quite non-trivial in such benchmarks. The most important aspects of using hybrid models are in (1) efficiency: Please see our efficiency response to `Reviewer pYLA`. GSM++ achieves the best performance while maintaining the best efficiency; and (2) graph understanding tasks, where hybrid models improve the performance by +1.5% on average. > *Some recent baselines* We would be happy to discuss these papers in our final version. For the purposes of evaluating the existing paper, we wish to note that the source code for the GRASS paper was made public **after the ICML submission deadline**. Furthermore, both of the mentioned studies are recently published by ICLR 2025 (Jan 22 one day before ICML deadline). Therefore, we did not have the opportunity to make a formal comparison in our paper submission. In addition, these works consider different goals than our own. While our focus has been on introducing new alternative graph learning architectures, GRASS proposes techniques (e.g., rewiring) aimed at enhancing existing models. Such techniques could also be applied to our architecture and may further improve its performance. Re-using the results in their original paper is not a fair comparison as they did not follow the standard range and use more than 500K parameters. Comparing their results with ours, even in the case that they use considerably more parameters, our GSM++ performs on par. For final version of the paper, we would be happy to include an extensive discussion about these papers. Due to space limit, we cannot include the results for an extensive list of baselines. But we have focused on a subset of SOTA models that have shown to outperform others. Please note that it's been uncommon to compare with all the baselines and usually a subset are chosen. The two studies mentioned by the reviewer also each compare with only one random-walk based model (we have Crawl and GMN, which is more than both) > *Choice of HAC* In Fig 3, we have already justified the choice of HAC rigorously, and compared it with 4 different existing tokenizations, including random-walk. Also, our CraWL and GMN baselines are both random-walk based models. Also please see our provided scalability results in our response to `Reviewer pYLA`. The HAC algorithm is scalable to graphs with billions of nodes in less than an hour (see their original algorithm paper). Therefore, the real world applicability of our approach is very promising. > *Theoretical work hinge on idealized settings* While the problems considered in our theoretical settings are more precisely defined and consistent than real-world noisy data, we would like to call attention to the fact that our theoretical results prove tight bounds for nearly identical tasks as the ones we investigate empirically. If there are any particular theoretical assumptions that trouble the reviewer, we are happy to have an extended conversation about why we made those assumptions and the feasibility of loosening them. > *Ablation Study* Our comprehensive ablation study in Table 4 isolates the benefits of individual components. We also go further and add our contributions to existing frameworks like GPS and NAGPhormer and show even our contributions can improve other frameworks performance. > *The performance gains shown in many benchmarks are marginal* We kindly bring to your attention that GSM++ achieves +1.9 performance gain on average and consistently outperforms SOTA baselines, which is not marginal and shows its applicability. > *Scalability and large graphs* Please see our response to `Reviewer pYLA` for scalability of HAC and the results on large graphs. > *Fair Comparison* We have re-used the original reported results by the paper. Also, for our own configuration we have followed the original configuration of the benchmarks. > *Novelty and Contribution* Please note that in this paper, we present: (1) a unifying framework, (2) a novel tokenization, (3) a novel mixture tokenization technique, (4) a new positional encoding, (5) a new hybrid usage of sequence models, and (6) a rigorous theoretical analysis of graph sequence models. We ask that you consider the extent and novelty of our contributions in comparison to other recently published studies, which often include only a small share of these components. > *Complexity of the model* > *The use of the hybrid model* > *For datasets from LRGB* Please see the response in our message to `Reviewer psEP`. --- Rebuttal Comment 1.1: Comment: I thank the authors for their effort in addressing my concerns. While they have addressed some of the clarity issues, most of my concerns still remain. Please find below my further comments: > Due to space limit, we cannot include the results for an extensive list of baselines. But we have focused on a subset of SOTA models that have shown to outperform others. If so, the claims regarding state-of-the-art performance in this paper appear misleading. While the authors claim that their model is compared to state-of-the-art GTs and GNN models and outperforms them in 8/10 cases, this comparison is limited to a selective subset of baselines. I believe that SOTA performance should indicate competitiveness with all existing methods, not merely improvement over a selective subset of baselines that may be possibly weak. A true state-of-the-art performance statement requires demonstrating superiority or at least parity against any current approaches in the field. > GSM++ achieves +1.9 performance gain on average Again, this gain was obtained when compared to **a selective subset of baselines**, and on small synthetic datasets such as MNIST, CIFAR10, PATTERN, but not larger or real-world datasets such as LRGB or OGBN datasets. More importantly, **their datasets also seem to follow some selective process**, but do not include all datasets from each benchmark. For example, the ZINC and CLUSTER dataset is excluded from the benchmarking GNNs datasets without any justification. Some other datasets (peptides-struct, PCQM-contact, Questions, etc.) are also excluded from the other two benchmarks. > Scalability I thank the authors for their additional results on the large ogbn-arxiv and ogbn-product datasets. However, again, the authors seem to select a subset of relatively weak baselines to compare with. For instance, the Polynormer [Deng'24] (despite being cited by the current work) has achieved 73.46 and 83.82 on arxiv and product, respectively. While one key contribution of this work is about scalability, the authors do not provide sufficient evidence about why their model should be preferred to these other GTs, such as Polynormer, designed for addressing very large graphs. Moreover, why are the results for GOAT in your table lower than the results reported in their original paper (71.96 vs 72.41 on arxiv)? > Even using the reported results [Tonshoff'23], the main message of the paper will remain the same. The main conclusion from [Tonshoff'23] is that GNNs with proper hyperparameter tuning can achieve comparable performance to SOTA GTs such as GPS on some datasets, which is opposed to the conclusion drawn in the original LRGB paper. Without a comprehensive tuning for baselines (while the tuning is much more extensive for the proposed model), I don't think the authors can make these conclusions. Moreover, the results of GPS from [Tonshoff'23] do significantly outperform the model proposed by the current work (e.g. 0.3884 (GPS) vs 0.3789 (GSM++) on COCO-SP, 0.4440 (GPS) vs 0.4128 (GSM++) on Pascalvoc-SP). I do think it is important to follow the good practices from [Tonshoff'23] to make any significant conclusions on the datasets from LRGB. > We are not aware of an existing study that combines virtual nodes with SSMs to prevent this issue. NeuralWalker [Chen'25] combines SSM and Virtual node. [Rosenbluth'24] demonstrated that using virtual nodes achieves similar or even better performance than transformers in many cases, as referenced in my original review. > Necessity of Hybrid model As acknowledged by the authors, the presented framework is complicated, with many different components. I believe the necessity of many proposed components needs to be more carefully studied. Particularly in the hybrid model, it's not clear why a transformer is required, while it is known to suffer from scalability issues. A more comprehensive study would be to compare a GSM with only SSM, only transformer, SSM+transformer, SSM+virtual node layer. > Discussion of Hierarchical representation learning method, such as DiffPool The authors overlooked my comments about the relationship between the proposed hierarchical positional encoding and hierarchical representation learning methods such as DiffPool. ### References [Deng'24] Polynormer: Polynomial expressive graph transformer in linear time. ICLR 2024. --- Reply to Comment 1.1.1: Comment: **Thank you for your time and engaging with us in the discussion phase.** > **SOTA Claim and Baseline:** Please note that we **have not** claimed to be state-of-the-art in the paper and it is not our primary motivation. We are concerned that the reviewer has misunderstand the main goals of our study and so we summarize the goals, claims and their supporting results: - Transformers has been one of the critical backbones for graph learning (with more than hundreds of studies in the past few years). However, they suffer from quadratic time complexity. Therefore, **as one of the alternative approaches**, researchers have focused on using sub-quadratic sequence models. But it remains unanswered that what is the (dis)advantages of Transformers and sub-quadratic models in graph tasks. We provide a theoretical results to show that Transformers are limited in tasks that requires inductive bias about the order of nodes, while recurrent models are capable of performing such tasks efficiently. On the other hand, recurrent models suffer from representational collapse, while Transformers are permutation equivariant and can mitigate representation collapse. Therefore, we suggest hybrid models and both theoretically and empirically show that they are better. - In addition to theoretical results, we present a novel tokenization (HAC), mixture of tokenization technique, new positional encoding, and new architecture (hybrid model that to the best of our knowledge is novel) for graphs. We provide theoretical and empirical results to support them. - Showing that hybrid models are the best alternative to pure Transformers (compared to virtual nodes) is not our main motivation and is not mentioned in the paper. It would be an interesting future work to compare such models. Also, once again, please note that your suggestion to compare with GRASS and NeuralWalker is not aligned with ICML guideline as their peer-review process was done **one day before the ICML deadline** and the code for GRASS was made public **AFTER** the deadline. We have compared with the best peer-reviewed model available at the time of submission and would be happy to discuss these two works in the final version. To make sure that we fully address your concerns, we provide a comparison of these models with updated GSM++ with random-walk tokenization in the MoT and same #parameters: | Model | Peptides-func | PascalVOC-SP | COCO-SP | MNIST | PATTERN |-|-|-|-|-|- | GRASS | 67.37 | **56.70** | 47.52 | 98.93 | 89.17 | NeuralWalker | 70.96 | 49.12 | 43.98 | 98.76 | 86.97 | GSM++ | **71.82** | 49.33 | **48.25** | **98.99** | **90.08** > **Additional Datasets:** Following your suggestion, we provide the results on the requested datasets: | Model | ZINC $\downarrow$ | peptides-struct $\downarrow$ | CLUSTER $\uparrow$ |-|-|-|- | GRASS | **0.047** | 0.2459 | 79.54 | NeuralWalker | 0.065 | 0.2463 | 78.18 | GPS | 0.07 | 0.2509 | 78.01 | GSM++ | 0.049 | **0.2451** | **80.09** GSM++ outperforms baselines, and the main messages of the paper remain unchanged. We hope these results fully address your concern. > **Scalability:** Please note that these results are presented for scalability, which we outperform baselines. It is very common in the literature that a model does not outperform all baselines in all datasets (e.g., Polynormer outperforms NeuralWalker in half of the heterophilic datasets). > **LRGB Results:** Please note that we didn't extensively tune hyperparameters for GSM++. A fair comparison requires the same training procedure for all models. As we mentioned in our previous response: (1) Even using the reported results in this paper, our model outperforms all the baselines. **Therefore, the main message of the paper will remain unchanged**. (2) Using hyperparameter tuning will also improve the performance of our models, leading to even a bigger gap with baselines. We report the improved results of both baselines and GSM++ with hyperparameter tuning in the final version. > **Necessity of Transformers:** We respectfully disagree with the reviewer that our model has many components. Our model has the same number of components as other GTs and even has less components than NeuralWalker and GRASS, mentioned by the reviewer. Please see line 204 to end of page 4 in the paper for our discussion on the necessity of Transformers in our design (avoiding representation collapse). > **Discussion on DiffPool:** Following your suggestion, we will discuss hierarchical pooling methods in the final version. --- **We kindly ask you to please consider the extent, novelty of our contributions, and evaluations in comparison to other recently published studies. We use 16 datasets and 26 baselines to evaluate our models, which is more than most recent studies. We provide comprehensive ablation studies to show the effect of each component. We present extensive theoretical results that even go beyond graph learning tasks/architectures.**
Summary: This paper transforms graph data into sequential data through tokenization, global encoding, and local encoding, and applies GSM for graph learning. The paper also analyzes the strengths and weaknesses of different sequence models in handling various tasks. Furthermore, they enhance the model by proposing GSM++, which generates ordered sequences using the HAC algorithm and combines recursive models (e.g., Mamba) with Transformer models, balancing efficiency and sequential information preservation. Additionally, the author introduces MoT, allowing each node to select the most suitable tokenization method based on task requirements. Extensive experiments validate the effectiveness of different tokenization methods and sequence models, demonstrating the superior performance of GSM++ on multiple benchmark datasets. ## update after review The feeback has fulfill my concerns. I raised my score. Claims And Evidence: Yes, but with limitations. While the paper is supported by extensive experimental results and theoretical proofs, the ablation studies fail to sufficiently isolate and quantify the individual impacts of key contributions (HAC and MoT). This creates uncertainty about their true significance in achieving the reported performance. Methods And Evaluation Criteria: The methods generally make sense but have scalability concerns. The proposed subgraph partitioning method is practical for handling local/global graph properties, but the reliance on DFS/BFS for graph construction raises scalability issues for larger graphs due to their slow performance. Theoretical Claims: Yes. Experimental Designs Or Analyses: Similar to "Claims And Evidence", the experimental designs are generally sound but lack crucial ablation analysis. While the extensive experimental results demonstrate effectiveness, the failure to properly isolate the impacts of hybrid method/HAC/MoT through ablation studies weakens the validity of causal claims. Supplementary Material: Yes. I read the whole parts. Relation To Broader Scientific Literature: The hierarchical global-local partitioning approach shows limited novelty as similar strategies have been explored in prior research. The integration of recursive models with Transformers builds effectively on existing work regarding combining sequential processing and global attention mechanisms. Essential References Not Discussed: No. Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: The concurrent introduction of GSM and GSM++ risks diluting the paper's focus. Consolidating these advancements into a single cohesive framework would strengthen the presentation. Questions For Authors: 1. Could you provide detailed runtime metrics for each experiment/task? 2. Have you considered combining MPNN (local processing) with Transformer (global processing) given their comparable performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you so much for your time and constructive review. > *The ablation studies fail to sufficiently isolate and quantify the individual impacts of key contributions (HAC and MoT).* We kindly want to bring to your consideration that the ablation for both HAC and MoT are already reported in the submission. In Table 4 (last line), while keeping other parts unchanged, we replace HAC tokenization with standard tokenization of GTs (i.e., node tokenization) and shows that HAC has the most contribution to the performance of GSM++ (please see lines 425 - 431). Regarding MoT, we have ablation studies in Table 3, 7, and 8 (last line of each), where we have GSM++ (MoT) variant, in which the only change is to replace the tokenization part with MoT. It improves the performance on all datasets. We also go further and even perform ablation on other existing architectures of GPS and NAGPhormer frameworks and while keeping other parts unchanged, we replace their tokenization with HAC/MoT and show that these changes improve their performance (Table 4 rows 3, 4, 7 and 8). In all these ablations, we isolate the individual impact of MoT and HAC and so we believe these experiments not only support the importance of HAC/MoT in our framework, but it shows their significance even when utilized in other frameworks. > *Scalability and concerns about BFS and DFS.* While we understand the concerns about computational cost, the HAC tree algorithm is a well-established clustering method known for its high parallelizability and efficiency, which is capable of scaling to graphs with billions of nodes and trillions of edges in less than an hour! Also, this clustering algorithm, similar to other positional encodings, is a one-time computation and can be done as the preprocessing step, ensuring that it does not impact runtime efficiency during model training. Following your suggestion and to fully address your concern, we compare the construction time of HAC with common positional encodings: | Graph Size | 10K | 20K | 30K| 40K | 60K | |-----------|-----------|-----------|-----------|-----------|-----------| | LapPE | 25 | 94 | 197 | 286 | 333 | | Random Walk | 103 | 217 | 329 | 447 | 709 | | HAC | 105 | 230 | 314 | 345 | 368 | Please also note that our general approach can scale to larger graphs with millions of nodes, while most baselines can scale to at most tens or hundreds of thousand nodes. Here we report the results on large graphs of arXiv-ogbn and products-ogbn (We will add the full results in the final version of the paper): GSM++ achieves the best result while maintaining the best efficiency. In addition to GPS, several other baselines like Grit and GMN face OOM issues. | | NAGPhormer | Exphormer | GOAT | GSM++ | |-----------|-----------|-----------|-----------|-----------| | arxiv | 70.13 | 72.28 | 71.96 | 72.61 | | Time | 5.96 | 2.15 | 8.16 | 1.95 | | product | 73.29 | OOM | 82.00 | 82.13 | | Time | 12.08 | OOM | 29.50 | 11.87 | > *Novelty and Contribution* In addition to our novel graph learning method of GSM++ (where it achieves the best performance while maintaining the efficiency), one of the important aspects of our study is its novel theoretical contributions that help better understand sequence models in a wide range of graph tasks. We comprehensively discuss how different types of architectures can be useful for graph tasks and support our claims with both theoretical and experimental results. > *Introduction of both GSM and GSM++* We appreciate the author highlighting ambiguity between GSM and GSM++. We intend to clarify in a final version of the paper that GSM is the name of our framework that unifies graph sequence models, while GSM++ is one of its variants supported based on our theoretical and practical findings. Therefore, we believe that GSM++ model complements all the discussions and intuitions that we build upon the GSM framework.
null
null
null
null
null
null
null
null
Trajectory Inference with Smooth Schrödinger Bridges
Accept (poster)
Summary: The authors proposed the Schrodinger Bridge (SB) with smooth priors guided by the Gaussian process to reduce the exponential cost in K for solving multi-marginal SB problems. The problem is first discretized (suffers from exponential cost) and then lifted to high dimensions and solved by belief propagation methods, resulting in polynomial costs. Claims And Evidence: The paper is a bit hard to read. The multi-marginal SB problem is 1) first discretized into discrete space in sec 2; 2) augmented with a new continuous variable and was related to a sequential conditional structure via the Gauss-Markov property; 3) was solved using a linear Belief Propagation (BP); 4) a practical (BP). Overall, I think the method is quite hard to understand, especially when I am not an expert in the Gaussian process and have no knowledge in belief propagation. Methods And Evaluation Criteria: NA Theoretical Claims: Didn't check. Experimental Designs Or Analyses: I don't know much about the usage of multi-marginal SB or the applications in RNA-Seq. Supplementary Material: Didn't check Relation To Broader Scientific Literature: This paper aims to alleviate the exponential cost when it comes to many marginals. Essential References Not Discussed: NA Other Strengths And Weaknesses: I am a bit confused by the usage of first considering the discrete state space and then uplifting the problem to a higher dimension and even augmenting with some continuous variables. It seems like we don't need the discretization step. As such, the exponential cost does not exist actually when we don't consider the discrete-type problems. Other Comments Or Suggestions: The first paragraph in section 3 is quite important. I suggest the authors make a diagram of the flow of methods to facilitate the reading. Questions For Authors: 1) limitation: its trajectories inherit the roughness of Brownian paths, leading to noisier estimators and less interpretable posterior paths. This problem can be easily solved by the probability flow ODE, why cannot we use the prob flow ODE to avoid the Brownian motion? 2) Can we tackle the continuous multi-marginal SB directly without tackling the discrete optimal transport? So many tools have been used, which may affect the adoption of this method. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for these comments. **Comment:** Can we use the probability flow ODE to avoid Brownian motion? **Response:** While probability flow ODEs are indeed powerful tools, they are not ideal for our specific goals for several reasons: 1. Inferring Individual Trajectories: - Probability flow ODEs focus on evolving the entire distribution and don't preserve individual particle identities or trajectories - Our method explicitly tracks individual particles, which is crucial for understanding the motion of individual particles (e.g., in N-body systems) 2. Statistical Framework: - The smooth Schrödinger bridge has an elegant quasi-Bayesian interpretation: the Schrödinger bridge matches the observed marginal distributions while keeping particle trajectories as faithful as possible to the prior $R$ - Probability flow ODEs lack such statistical guarantees when used for smoothing 3. Other Advantages of our Method: - Our method naturally provides velocity and acceleration estimates. These emerge directly from the HMM model we use - Post-hoc smoothing via ODEs would require additional assumptions and computation This makes our approach more suitable for applications requiring detailed trajectory analysis and closer adherence to the prior distribution. **Comment:** Can we tackle the continuous multi-marginal SB directly without tackling the discrete optimal transport? **Response:** The goal of this paper is to infer particle trajectories that are as close to the prior path distribution as possible while matching the observed empirical distributions. Empirical distributions are discrete by nature, so a discrete multi-marginal Schrödinger bridge problem is a natural starting point. However, our algorithm can be extended to continuous marginals (population distribution) by using the following approach: 1. Replace the messages $\beta$ and $\gamma$ with their continuous versions 2. Replace sums with integrals in (8) on the right-hand side of the equation This continuous extension of our algorithm yields scaling functions $\beta^*_0, ..., \beta^*_K$ that correspond to the multi-marginal Sinkhorn algorithm for continuous distributions. Crucially, this modification preserves the computational efficiency of the discrete version, reducing the exponential complexity of multi-marginal Sinkhorn to polynomial time. Furthermore, similar to how we represent $\delta$'s using orthonormal bases in Section 5, the continuous $\beta$ functions can also be expressed through basis expansion, maintaining both mathematical rigor and computational tractability. **Comment:** I suggest the authors make a diagram of the flow of methods to facilitate the reading. **Response:** We appreciate this suggestion and will add a diagram to the revised version of the paper. The diagram will illustrate the flow of methods, including the relationship between the multi-marginal Schrödinger bridge problem, the static problem, and the lifted space. This will help clarify the overall structure and flow of our approach for readers.
Summary: **Disclaimer: Despite the forthcoming criticisms, I find this paper intriguing and recommend its acceptance.** ------------------------------ This paper introduces a class of smooth Gaussian processes as priors for Schrödinger bridges and designs efficient algorithms for their computation. A key insight is that, while these processes are inherently non-Markovian—potentially leading to exponential-time complexity in terms of the number of marginals—the proposed class can be transformed into a **Markovian** form. This transformation enables the use of efficient message-passing algorithms. The authors validated their methods through simulations and experiments on low-dimensional real-world problems. ## update after rebuttal I would like to thank the authors for their detailed rebuttal. As stated previously, I find this paper interesting and continue to advocate for its acceptance. Claims And Evidence: ### Unsupported Claims Several claims in the paper lack sufficient justification: 1. **Motivation for Using Smooth Priors** The core motivation of the paper is to replace the standard Brownian motion-driven prior with a smooth one. While this idea seems reasonable, it is not thoroughly justified. Many trajectory inference problems indeed require smooth trajectories, but the authors should provide concrete examples illustrating when and why this is beneficial. Furthermore, the authors should explicitly state that their RNA-seq experiments are only toy versions, as they are reduced to dimensions 2 and 5 (a fact buried in the appendix). In reality, biological processes often exhibit strong microscopic randomness, making it unclear why smooth trajectories are preferable in this context. 2. **Continuity of Processes Relative to the Prior** A key missing question is: *What processes are absolutely continuous relative to the prior introduced in this paper?* This is crucial because, even if the prior is smooth, the minimizers of the Schrödinger bridge (SB) problem might still concentrate on non-smooth trajectories. If this is the case, it challenges the authors’ claim that they are performing *smooth* trajectory inference. 3. **Static vs. Dynamic Schrödinger Bridges** The authors appear to focus only on the **static** Schrödinger bridge, which corresponds to multi-marginal optimal transport with entropy regularization. While it is well known that the **values** of the static and dynamic SB problems coincide, recovering the latter from the former requires an additional flow-matching-style training step, which can be computationally expensive. This raises concerns about whether the comparisons in the experimental section are fair, especially regarding diffusion-based methods that train dynamic SBs directly. 4. **A Fundamental Limitation: Essentially a One-Dimensional Schrödinger Bridge** The most significant limitation of the proposed framework is that it **essentially reduces to a one-dimensional Schrödinger bridge**. The theory does not extend to high-dimensional Gaussian processes with dependent coordinates, which is a major drawback for real-world applications. This limitation likely explains why the experiments are restricted to problems of at most dimension 5. Methods And Evaluation Criteria: The experiments are primarily conducted on synthetic data and low-dimensional settings. The efficiency of the proposed algorithm in high-dimensional scenarios remains uncertain. Theoretical Claims: The theoretical claims are sound and interesting. Experimental Designs Or Analyses: The experimental design seems flawed; see my criticisms in **Claims And Evidence** and **Methods And Evaluation Criteria**. Supplementary Material: I have read all the supplementary material. Relation To Broader Scientific Literature: The key contribution of this paper is the introduction of a novel class of smooth priors for the Schrödinger bridge problem, which, to the best of my assessment, is both original and valuable. Essential References Not Discussed: The references are appropriately cited. Other Strengths And Weaknesses: I have outlined several criticisms in **Claims and Evidence**. However, I believe this paper makes an interesting contribution due to the following strengths: - **Novelty of the setting** – As the authors point out, introducing a smooth prior for the Schrödinger bridge problem is a novel and underexplored direction. - **Simplicity of the algorithm** – The proposed algorithm is a straightforward modification of existing methods for Sinkhorn-type problems, making it both easy to understand and implement. - **Clarity of presentation** – The paper is well-written and effectively communicates its ideas. Other Comments Or Suggestions: There is a "Table ??" at the bottom of p.19, line 1043. Questions For Authors: Please see **Claims And Evidence**. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for these insightful comments. **Comment:** Which processes are absolutely continuous with respect to the GAP? Is the minimizer of the SB problem with GAP prior smooth? **Response:** When $R$ is a smooth prior, the solution to (1) is also smooth, in the same sense. To be more specific, a sample path of an order $m$ GAP is $m-1$-times differentiable with probabiliy 1. In other words, there is an event $A \subseteq \Omega$ such that each $\omega \in A$ lies in $C^{m-1}$ and $R(\Omega \setminus A) = 0$. Since the solution $P$ to (1) is necessarily absolutely continuous with respect to $R$, we must have that $P(\Omega \setminus A) = 0$ as well, so that $P$ is concentrated on $C^{m-1}$ paths. We will clarify this. Alternatively, the explicit form of the solution to the smooth SB problem given in Theorem 4.1 shows that the optimal $P$ is a mixture of Gaussian processes, which inherit the smoothness of $R$. **Comment**: Recovering the dynamic SB solution from the static one requires an additional flow-matching-style training step, which can be computationally expensive. **Response:** We would like to draw a distinction between two senses of "recovering" the dynamic SB, focusing for a moment on the standard SB. The *solutions* of the static and dyanmic SB problems actually coincide (by convolving the solution to the static problem with a Brownian bridge, see Leonard 2014). This is the argument we use in the proof of Lemma 2.1. In this sense, the dynamic SB is "recovered" directly from the static problem, and in particular, this allows easy sampling from the dynamic SB. However, the reviewer is correct that recovering the dynamic SB *as a stochastic process* (e.g., recovering the drifts in the corresponding SDE) is not so direct, though it can still be accomplished from the static problem (as in Pooladian & Niles-Weed, 2024). In our work, we recover a smooth dynamic SB in the first sense (with very reasonable computational complexity): given the solution to the static problem, we "recover" the corresponding dynamic solution in the sense that we can easily sample from it (using standard techniques in GP regression). In particular, we can sample from intermediate, unobserved times. (See our "leave-one-out" experiments in Tables 2 and 3, which shows that this leads to accurate reconstruction on synthetic data.) It is less clear how to recover it in the second sense, since the smooth dynamic SB is not given by the solution to an SDE. We agree that it would be interesting to pursue this direction in future work. **Comment:** What is the practical motivation for using smooth priors? **Response:** Our motivation is twofold: 1. In many systems, the only physically relevant models involve particles whose velocities vary continuously, i.e., whose paths are at least $C^1$. This is never the case with a BM prior. As our experiments show (e.g., Figure 4), this is particularly relevant when trajectories of two particles cross and separate: the BM prior easily loses the identity of particles between timesteps when two trajectories intersect, but assuming that the velocities are continuous allows the particles to be identified. 2. Existing work on trajectory inference largely focuses on smooth trajectories (see [F&S](https://proceedings.mlr.press/v130/chewi21a.html), [WLR](https://arxiv.org/abs/2405.19679) as examples), not for physical but for essentially *statistical* reasons. Like classic cubic splines, at a heuristic level, fitting data with smooth paths helps to ameliorate overfitting and ensure interpretability of the resulting estimates. Making this intuition rigorous in a particular statistical setting is a very interesting open problem. **Comment:** The most significant limitation of the proposed framework is that it essentially reduces to a one-dimensional Schrödinger bridge. The theory does not extend to high-dimensional Gaussian processes with dependent coordinates. **Response:** The fact that the theory does not extend to processes with dependent coordinates is not quite true. First, in Theorem 2.3, the covariance matrix $\sigma$ need not be diagonal; this allows the coordinates of the white-noise process defining the GAP to be correlated. More generally, if we consider a large covariance matrix $\mathbb{R}^{md \times md}$ that describes the covariance among all entries of hidden variables across dimensions, it is not necessary that this matrix be block diagonal (corresponding to independent coordinates). From an algorithmic perspective, this only changes the value of $\Gamma$ (see equations 19 and 20); the algorithm need not be changed. However, we agree that from a computational perspective, the use of independent coordinates is beneficial, since the tensor $\Gamma$ factors. This greatly reduces the space complexity of storing $\Gamma$, and simplifies some message passing updates. We also point the reviewer to our general comments above on dimension dependence.
Summary: The authors proposed a novel method to learn smooth trajectories in an SB problem, extending the usual SB method to allow non-Markovian reference by lifting it to phase space, that also effectively extending momentum SB. The method is accompanied with an approximated belief propagation algorithm. Claims And Evidence: The idea is well supported with theoretical results and empirical claim of algorithm is support by experimental evidences. The only part being a bit hard to follow is that whether marginals are the continuous, population version or the discrete, sample version in theories. It might be helpful to spell that out more explicitly when using them. Methods And Evaluation Criteria: The evaluation criteria makes sense for empirical tests. The only potential complain I have is that the dimension tested are relatively low. Theoretical Claims: I tried to check the proofs and did not find obvious errors. However Lemma 2.1 only has proof sketch and I want to see a fully spelled out proof for completeness. Especially spell out the assumptions on $\mu_k$, which appears to be continuous in the sketch but said to be general in lemma statement. In fact I would like the authors to double check if there is any missing assumptions for the marginals across the paper. This is the main reason I put the current score and I am more than happy to change if assumptions are better spelled out. Experimental Designs Or Analyses: I checked all the experiment and did not find obvious issues. Supplementary Material: I checked proofs (C) and additional experiments (F.5). Relation To Broader Scientific Literature: The method extend momentum SB that are SoTA for smooth trajectory inference and can use quite general non-Markovian references. I think the latter is more important if it can be served as a foundation for more generalization to missing dimension problems. Essential References Not Discussed: Not I am aware of immediately. Other Strengths And Weaknesses: Strengths - The HMM view is very useful and I think it has potential to be extended to other setups beyond smoothness. Weaknesses: - I am unsure how scalable the algorithm to dimension is. Single cell data can have much higher dimensions. Even if not, having an algorithm work in several dimensions can be useful for a range of applications still. Other Comments Or Suggestions: - There are missing cross references at then end of page 19 and caption of Table 16. - I would like to have a full proof instead of proof sketch of Lemma 2.1, mostly to be sure about assumptions went in. Questions For Authors: These questions might be out of the scope of the current paper but I am curious 1) it appears to me that the lifted SB problem having Markovian reference is key for the algorithm. It might be viewed as a missing data problem that we did not observe any higher order derivatives (which is momentum for momentum SB problem). Does the algorithm translate to general missing data problem that 1) some dimension are missing and 2) if they are observed the prior should be Markovian? Or is it also essential to have Gaussianity in GAP? 2) Is it true that an inferred trajectory necessarily pass one particle at each time point? This seems inherent from lemma 2.1. I understand this corresponding to interpolating empirical distributions. I wonder however, is this a desired property for all applications, e.g., in single cell examples/point cloud task one loses particle identity by killing the cell, so each (latent) trajectory is evaluated once and the chance a latent trajectory pass two observational data is zero, but in application e.g., individual particle tracking and N body problems one loses identity by permutation and we kept the same set of particles and we indeed want trajectories to pass exact one particle each snapshot. Is there an assumption that each time snapshots has same number of particles? 3) Related to 2), Lemma 2.1 (maybe also 3.1?) seems to assumed $\mu_k$ to be absolutely continuous, I wonder from when on the theories start to use empirical measures instead of population version? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to N7xY We thank the reviewer for these insightful comments. **Comment:** What is the assumptions on the marginals in Lemma 2.1 (and 3.1)? In fact I would like the authors to double check if there is any missing assumptions for the marginals across the paper. **Response:** Thank you for allowing us to clarify this important point, which we'll make clearer in the revision. A complete proof of Lemma 2.1 is now included in the appendix. Lemma 2.1 consists of two parts: 1. The first part demonstrates that the multi-marginal Schrödinger bridge problem with finite marginal constraints (1) can be reduced to a finite-dimensional KL divergence minimization problem (2). This reduction requires no assumptions about the marginals. 2. The second part addresses a technical challenge: we want to define a multi-marginal Schrödinger bridge problem with the prior $R$ being a GAP and the marginals being discrete. However, the KL divergence in the optimization problem is ill-defined when $R$ has absolutely continuous marginals and the marginals are discrete. We address this by proving that problem (2) is equivalent to formulation (3) *if* the marginals have densities. Since (3) is well defined even for discrete marginals, we take (3) as the **definition** of the multi-marginal Schrodinger Bridge for general marginal distributions. This is in fact a standard approach in statistical analysis of the Schrodinger Bridge problem, see, e.g., Pooladian & Niles-Weed (2024). The equivalence between (2) and (3) uses two assumptions: * The marginals $\mu_k$ are absolutely continuous with respect to the Lebesgue measure, with finite entropy $\int \mu_k(x) \log \mu_k(x) dx$ * The joint marginals of the prior $R$ are absolutely continuous In the remainder of the paper, we adopt the setting described in the beginning of Section 2.2 where the marginals are sampled data (i.e., $\mu_k$ is discrete). In particular, this is true of Lemma 3.1. Note that, before the lemma, we defined $p(x, y)$ to refer to a "mixed" discrete-continuous density, where $x$ coordinates are discrete but $y$ coordinates are continuous. We will emphasize this point in the revision. **Comment:** Is the Markovian assumption in the GAP prior essential? How about the Gaussianity? **Response:** The GAP itself is not Markov; however, the Markovian structure of the **lifted** process is crucial for our algorithm because: 1. It enables factorization of the joint distribution into pairwise terms, allowing efficient message passing. 2. It ensures the solution $p(z)$ of (6) is Markovian (by Theorem 4.1), enabling efficient trajectory sampling. Gaussianity in the GAP prior provides computational efficiency through closed-form message passing updates but isn't essential. The method can be generalized to work with any process that can be lifted to a Markov process on the phase space by replacing the potential functions between $\eta_k$ and $\eta_{k+1}$ with the conditional density $r(\eta_{k+1} | \eta_k)$ of the desired prior process. **Comment:** Does the algorithm translate to general missing data problem? **Response:** **Yes**. As the reviewer mentioned, not observing higher-order derivatives is a form of missing data handled by our algorithm. Other forms of missing data can be incorporated by modifying the factor nodes $\alpha_k$ in the graphical model (Figure 3). Currently, these nodes are indicators enforcing that only positions $\omega_k$ are observed. By modifying these factor nodes, one can incorporate other types of missingness in the observations. We plan to pursue this in future work. **Comment:** Is it true that an inferred trajectory necessarily pass one particle at each time point? And does is makes sense in the context of single cell example? Is there an assumption that each time snapshots has same number of particles? **Response:** Yes, in the current formulation, inferred trajectories must pass through exactly one particle at each observed time point, due to the marginal constraints. For point cloud matching applications, while tracking individual "particles" may not be physically meaningful, our algorithm remains valuable for inferring collective motion patterns: 1. We can infer velocity and acceleration fields that characterize point cloud dynamics 2. We can interpolate the point cloud distribution at unobserved times by sampling from the solution Our results in Table 2 show that this approach yields good results. Relaxing the requirement that trajectories pass through observed points can be achieved by modifying the factor nodes $\alpha_k$ to model observational noise. This allows inferred trajectories to deviate from exact observed positions. We've added this extension as a remark in the revision, and plan to pursue it in follow-up work. Our method does not require the same number of particles across different time snapshots. Both the theoretical framework and algorithm support arbitrary discrete marginal distributions. --- Rebuttal Comment 1.1: Comment: I really appreciate the authors' response! I have raised my score. - Especially the clarification on discrete vs continuous marginals. I asked this mainly because estimating the bridge between two continuous marginals while only having samples from them is subtly different from estimating the bridge between the two corresponding empirical measures e.g. to get the first you do not have to have trajectories pass at least one particle each marginal and some of the half-bridge based methods do not require that. Please make sure to have them clarified in the revision on which assumptions are used in each result. - Thank you for the answer to the missing data generalization. The reason I raised this question to start with is that I was not very convinced that in scRNA-seq applications we want smoothness given the intrinsic noise in scRNA-seq data is from single-molecule chemical reactions and is inherent. However, missing data is common, e.g. scRNA-seq cannot measure protein level but in reality, protein is the one that actually regulates RNA. E.g. a model widely accepted by biologists in [1] (section *BoolODE: converting Boolean models to ODEs.*) has protein components in it that are not observable. The only work I know tried to deal with this is [2] (app. C.4) but used a quite different idea, parametric models and the goal is not quite trajectory inference. So I am glad the authors' work could lead to a solution. I look forward to the authors' follow-up work :). But please make some comments on this potential as it might attract biologists. Again, thank the authors for this nice piece of work, well done. [1] Pratapa, Aditya, Amogh P. Jalihal, Jeffrey N. Law, Aditya Bharadwaj, and T. M. Murali. "Benchmarking algorithms for gene regulatory network inference from single-cell transcriptomic data." Nature methods 17, no. 2 (2020): 147-154. [2] Berlinghieri, Renato, Yunyi Shen, and Tamara Broderick. "Beyond Schrödinger Bridges: A Least-Squares Approach for Learning Stochastic Dynamics with Unknown Volatility." In 7th Symposium on Advances in Approximate Bayesian Inference -- Workshop Track. --- Reply to Comment 1.1.1: Comment: Thank you very much for the helpful references---this seems like a quite promising direction! We definitely plan to comment on this in the revision, and are very grateful for the pointers.
Summary: The paper proposes solving multi-marginal Schrödinger bridges w.r.t. a reference process based on autoregressive Gaussian process. Interpretation to phase space is constructed, which leads to a tractable algorithm based on probabilistic graphical models and belief propagation. Claims And Evidence: - In the end of Sec 2.2 and Footnote 1, it's a bit unclear why "smooth" paths corresponds to stationary processes. As mentioned by the authors before Sec 3, there are a few prior works that encourage smoothness through non-stationary processes. Overall, I think this paper presents an interesting theoretical finding yet falls short on the experiment sides. Methods And Evaluation Criteria: Y Theoretical Claims: Y Experimental Designs Or Analyses: - Experiment results are in rather low dimension (<= 5). Dimension reduction is performed on some sc-RNAseq datasets, e.g., Embryoid Body was initially reported in MIOFlow with dimension 200. How does the complexity of the proposed method scales in terms of dimension? Supplementary Material: Y Relation To Broader Scientific Literature: Numerical methods for solving trajectory inference problems could be beneficial, as these problems are widespread across various scientific domains. Essential References Not Discussed: N Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Typo in L268: Sec 3 should be Fig 3 Questions For Authors: Can the authors comment on the relation to https://arxiv.org/pdf/2006.14113? This prior work also introduced a belief-propagation based method for solving multi-marginal OT. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # General remarks We thank the reviewers for their helpful comments and questions. We are gratified that the reviewers found our method "interesting" (XtBd), "novel," "very useful" (N7xY), and theoretically "sound" (okGt). We would like to address an important issue that arose in several reviews: **dimension dependence**. First, we agree that we should highlight more explicitly that our experiments were conducted on low-dimensional data. We plan to include this fact prominently in our revision. Second, the reviewers are correct that the dependence on dimension is poor. The running time scales quadratically with $M$, the number of coefficients used in the approximation algorithm (see Theorem 6.1). As mentioned in Section 6, $M$ will typically be exponential in the dimension, limiting our approach to relatively low dimensional problems. However, we offer several arguments supporting the interest of our method: 1. A low-dimensional algorithm is still valuable in applications. For example, in the fluid dynamics and astronomical object tracking examples mentioned in the introduction, the data lies in dimension 2 or 3. Our method makes an important contribution to these inherently low-dimensional problems. Based on our experiments, smooth SBs are now the leading method for N-body tracking problems in 3 dimensions. 2. Existing approaches often fail, even in low dimension. In the 5-dimensional Dyngen dataset, our algorithm (Figure 5) provides significantly more reasonable results than current leading approaches (F&S, Figure 7; MIO, Figure 8). Another algorithm, DMSB, fails to even converge (Table 2). This indicates the low-dimensional setting is far from solved. 3. A small number of coefficients go a long way. In our experiments on the 5-dimensional Dyngen dataset, we use 1024 coefficients (only 4 per dimension), yet our approach still brings substantial benefits. We have also conducted experiments on 10-dimensional data using 1024 coefficients. Our method outperforms the other methods on this data in a leave-one-out task. We will include this new, higher-dimensional experiment in our revision. 4. It is common to assume that high dimensional data has low intrinsic dimension. Many approaches to trajectory inference begin with non-linear dimension reduction. For example, the pipline developed by Schiebinger et al. (2019) begins by reducing dimension to 30. While this is still larger than we can handle, "working up" from low-dimensional examples can yield biologically relevant insights as computational power improves. # Response to XtBd **Comment:** In the end of Sec 2.2 and Footnote 1, it's a bit unclear why "smooth" paths corresponds to stationary processes. **Response** The footnote about stationary processes has been removed. To clarify, we've added a new lemma in the appendix proving that any Markov Gaussian process with differentiable paths is essentially trivial: Let $\omega: t \mapsto \omega(t)$ be a real-valued Gaussian process that is Markovian and a.s. differentiable. For arbitrary $t \ge 0$ such that $\mathrm{Var}(\omega(t)) > 0$, we have $\omega(s) = \mathbb{E}[\omega(s)|\omega(t)]$ for all $s \ge 0$ a.s. In other words, any such Gaussian process is essentially deterministic, indicating that obtaining smooth paths requires moving beyond Markov processes. **Comment:** Relation to [2006.14113]? **Response:** We appreciate the reviewer mentioning this important work, which we failed to cite though we cited its companion works (''Learning hidden markov models from aggregate observations'' and ''Multimarginal optimal transport with a tree-structured cost and the Schrödinger bridge problem"), which have the same authors. In [2006.14113], the authors develop a belief propagation algorithm for multi-marginal optimal transport. The use of belief propagation for multi-marginal OT has a long history (Teh & Welling, 2001), and like [2006.14113], we use this connection for an efficient algorithm. However, we believe we're the first to exploit this connection to develop an algorithm for the Schrodinger Bridge problem with a smooth prior. Our Algorithm 2 is indeed analogous to Algorithm 2 in [2006.14113], tailored to our setting: 1. The potential functions are: - Between $\eta_{k-1}$ and $\eta_k$: Given by $\Phi_k$ (defined in Section 4) - Between $\eta_k$ and $\omega_k$: Given by the indicator function $\mathbf{1}_{\{z_k^{(0)} = x_k\}}$ 2. Our message passing variables ($\delta^-$, $\delta^+$, $\beta$, $\gamma$) correspond to their factor-to-variable messages $m_{\alpha \rightarrow j} 3. Our operators $\mathcal{L}$, $\mathcal{I}$, $\mathcal{R}$ implement the operations in equations (38a,b) An important technical difference is that since our $\eta_k$ variables are continuous, the operators $\mathcal{L}$, $\mathcal{I}$, $\mathcal{R}$ cannot be directly implemented. This fact requires us to develop an approximation scheme, which gives rise to our practical implementation detailed in Section 5.
null
null
null
null
null
null
A Variational Information Theoretic Approach to Out-of-Distribution Detection
Accept (poster)
Summary: The paper introduces a variational and information bottleneck-informed framework for performing feature shaping for out-of-distribution detection. Namely, the proposed objective maximizes the KL distribution between the distribution of ID features $Z$ and OOD features $\tilde{Z}$ subject to an information bottleneck regularization which seeks to maximize the mutual information between $Z$ and the ID / OOD label $Y$ while minimizing the mutual information between $Z$ and $Z$. The work show qualitatively that the feature-shaping approaches proposed by past work are similar to their objective under the assumption of independent features and different choices for the distribution of the OOD features (e.g., Gaussian with a different mean from ID, Inverse Gaussian, etc.) and regularization coefficient on the information bottleneck term. Then, the work considers a general piecewise linear shaping function for feature shaping and shows that fitting this function to validation OOD data outperforms previously proposed feature shaping OOD detection methods on test OOD data. Claims And Evidence: 1. It's not clear to me how the proposed theory predicts the generalized piecewise linear feature shaping function proposed later in the work, as mentioned in the abstract. 2. Where does the claim "Note this forms a Markov Chain $Y\rightarrowX\rightarrowZ\rightarrow\tilde{Z}" (line 137-138) come from? It seems more like an assumption the authors are making (a reasonable one to me), but might be better written that way. 3. Other claims seem reasonable. Methods And Evaluation Criteria: The ImageNet-1k and CIFAR-10/100 OOD detection benchmarks make sense for comparing OOD detection methods. The authors also test 4 different pretrained models encompassing convolution- and transformer-based architectures. Theoretical Claims: I skimmed but did not carefully check the algebra for the Gaussian example and gradient derivations. Experimental Designs Or Analyses: Experimental design seems sound. The qualitative analysis comparing the mean of $\tilde{z}$ under the proposed objective with different assumptions vs. existing feature shaping methods is quite interesting, but ultimately seems to be an approximate relationship based on the shape of the resulting curves; for instance, the actual functions are not the same if you look at the x and y axes' values. Supplementary Material: I skimmed through the entire appendix. Relation To Broader Scientific Literature: The paper proposes a unifying framework to think about feature shaping methods for OOD detection, enabling more critical examination of existing and new methods in this space. Essential References Not Discussed: To my knowledge, the essential references are discussed. Other Strengths And Weaknesses: Strengths: 1. The paper provides a unifying and principled framework to view many disparate recent methods in OOD detection. 2. The analysis from the framework leads to some observations about the properties that seem to work well for feature shaping. Weaknesses: 1. The link between the settings discussed in the framework and existing methods is qualitative, as the curves match only in approximate shape. Other Comments Or Suggestions: One way to strengthen this connection mentioned in weaknesses above would be to choose assumption values that would result in functions that are close in distance, not just in shape. Questions For Authors: My primary questions are: 1. Can the authors explain what they say that the proposed theory predicts a new shaping function that can outperform existing ones? And explain the evidence for the claim? 2. Could the authors update Figure 2 to show curves that would closely match those in Figure 3 if overlaid using the same axes? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *Question 1: It's not clear to me how the proposed theory predicts the generalized piecewise linear feature shaping function proposed later in the work, as mentioned in the abstract.* **Response 1:** See Response 2 to Reviewer p7gJ. *Question 2: Where does the claim "Note this forms a Markov Chain $Y\rightarrowX\rightarrowZ\rightarrow\tilde{Z}" (line 137-138) come from? It seems more like an assumption the authors are making (a reasonable one to me), but might be better written that way.* **Response 2:** The reviewer is correct, it is an assumption (not a claim) that Y (ID or OOD class) produces data/image X from which the feature Z is computed, and then from the feature Z, the OOD feature Zt is computed. The reason for spelling this out is to use the Information Bottleneck, which requires this setup. We will add a comment in the manuscript. *Question 3: Experimental design seems sound. The qualitative analysis comparing the mean of under the proposed objective with different assumptions vs. existing feature shaping methods is quite interesting, but ultimately seems to be an approximate relationship based on the shape of the resulting curves; for instance, the actual functions are not the same if you look at the x and y axes' values.* **Response 3:** Yes, the reviewer is correct – the relation between our shaping functions and SOA is qualitative in shape and approximate as discussed in Section 4 (e.g. Line 294 – that under Gaussian OOD the approach is similar to clipping in ReAct; Line 324 – that under Laplacian OOD a positively sloped region for intermediate values is similar to VRA, etc). It isn’t our goal to precisely numerically match SOA methods – as many are heuristically derived and there is no reason to believe that the optimal solution based on principles will precisely match empirically driven heuristics. We aim to understand whether SOA practice, though heuristic-driven has similarities of what an optimal principled approach would predict (hence justifying and understanding these heuristics by showing that the heuristics have traits of a principled approach), but this will not be a precise match in numerical values. Our work is interested in qualitative understanding of SOA such as, why/when should one clip values of the feature (like ReAct)?, why/when is it a good idea to suppress small values (like ASH)? We believe the answers to these questions give intuition and insights that will help researchers design new methods. Having said that, the primary reason that our shaping functions are not on the same scale as current SOA shaping functions is because current SOA’s scale is chosen to fit the energy score function (this is mentioned in the footnote of page 6), while our theory does not involve the score since our goal is to understand the feature making minimal assumptions. It is nevertheless interesting to see that our shapes (in Fig 2), which resemble SOA, do not depend on choice of score, even though SOA has made the energy score assumption and tuned according to that score. This indicates the shapes to be a universal property regardless of score. Note to match scales in practice with the energy score as we do in experiments, our piece-wise linear function family encompasses functions that also have similar scale values to SOA methods. *Question 4: Can the authors explain what they say that the proposed theory predicts a new shaping function that can outperform existing ones? And explain the evidence for the claim?* **Response 4:** The piece-wise linear family of shaping functions introduced are piece-wise linear approximations of the optimal shaping functions in Fig 2 (see Response 2 to Reviewer p7gJ). The results of the optimal piecewise linear function have out-performed SOA as shown in the Experiments Section 5. *Question 5: Could the authors update Figure 2 to show curves that would closely match those in Figure 3 if overlaid using the same axes?* **Response 5:** Please see Response 3. --- Rebuttal Comment 1.1: Comment: Thanks for the response. My question about motivating the specific piecewise linear function still stands, especially in light of reviewer p7gJ pointing out prior work which also fits a more expressive piecewise linear function. --- Reply to Comment 1.1.1: Comment: Please see Response 4 to Reviewer p7gJ. We would like to make two points: 1) As noted in the response to Reviewer p7gJ, compared to FS-Opt ([1] cited by Reviewer p7gJ), we are optimizing the loss over a more general class of functions (infinite dimensional functions) rather than restricting the optimization to piece-wise linear functions. In that sense our framework is actually considering a much larger class of shaping functions than [1]. The functions that optimize the losses both in [1] and our framework narrow down the class of functions to a particular form (because all of them are clearly not useful). In our case, the general set of functions is narrowed down to a much smaller a set of functions that are optimizers of our loss. These optimized functions (see Fig 2) can be approximated well with the particular piecewise family in Fig 4 (i.e., this piecewise linear family encompasses resulting optimal mean features under Gaussian, Laplace and IG with various hyperparameters alpha, beta and the distribution parameters). In the case of [1], the general class of piecewise linear functions is narrowed to the one with the expression given in Eqn 14 of their paper. 2) With no other constraints, a more expressive piecewise linear function family is not necessarily better in OOD detection performance as the more expressive piecewise linear family can be associated with distributions that may even be outside the class of realistic OOD distributions (note that our piecewise linear function family comes from distributions that have properties of empirical distributions observed from realistic OOD datasets – see Response 2 to Reviewer 4N3p). Therefore, with no other constraints, a more expressive family may not lead to better OOD detection performance.
Summary: ### Background - This paper works on the OOD detection task. - Previous methods use different feature shaping functions to reshape features from the penultimate layer of a pre-trained network. They achieve SoTA performance on OOD detection benchmarks, but lack in theoretical evidence and may not generalize to unseen datasets. ### Method - The authors develop a theory to formulate OOD features, and they propose a new loss functional consisting of two terms: “the first based on the KL divergence separates resulting ID and OOD feature distributions and the second term is the Information Bottleneck, which favors compressed features that retain the OOD information.” - Their theory can recover properties of existing feature shaping functions, based on different assumptions on OOD distributions. - They also design a new shaping function. ### Results - Their new shaping function outperforms previous methods on OOD detection benchmarks. Claims And Evidence: Yes Methods And Evaluation Criteria: - The proposed method is a piecewise linear shaping function, which contains 7 hyperparameters. According to the supplementary materials, the values of these hyperparameters are varying with different models on different benchmarks. For example, $y_{1b}$ is 0.73 when using ResNet-50 but 1.76 when ViT-L-16. How do the authors choose the values of hyperparameters? - I think the proposed method is not related with the theoretical analysis. I appologize if I miss something important. I expected that, the authors can "infer" some good shaping functions based on their theory. For example, maybe the theory can tell us what family of functions is good or help us to decide the values of hyperparemeters. But I don't see how the authors utilize their theory to design the function. It seems that the proposed function is mannually designed on different datasets. Theoretical Claims: Sorry that there are some parts I don’t quite understand. Could the authors provide more explanation? - How to understand OOD feature is $\tilde{Z}\sim p(\tilde{z}|z)$? Why does the OOD feature depend on feature $z$? What does this $p$ mean, the neural network or the shaping function? - Eq (4), how to understand "where \tilde{Z} is analogous to T and Z is analogous to X"? - I'm not sure if the proposed theory aligns with the real practice. Under different OOD distribution assumptions, the theory can recover some properties of previous methods, but normally, these methods are all good on ImageNet benchmarks. So, what is the real OOD distribution of the benchmark? Which method is the optimal one? Experimental Designs Or Analyses: The experiments are conducted on standard OOD detection benchmarks. The authors validate their method with different models and different datasets. Supplementary Material: I reviewed the supplementary material. Especially, I checked how the authors decide the values of hyperparameters. Relation To Broader Scientific Literature: Feature shaping methods achieve the SoTA performance in OOD detection, but lacks in theoretical analysis. This paper provides a new theory, which can recover some properties of previous shaping functions. I believe their paper can inspire more ideas in the future. Essential References Not Discussed: I think all related works are included and discussed. Other Strengths And Weaknesses: ### Strengths - Provide theoretical analysis, which is very interesting. The theory can recover properties of previous methods. - FS-OPT only considered the mean of ID and OOD features, while this work considers the distribution. - Their proposed method achieves better performance on different benchmarks. ### Weaknesses - Another major drawback of FS-OPT is the assumption of independence between features, so their framework cannot explain vector-based shaping functions like ASH. This paper also assumes independence of features, so I think it can be extended in the future. Other Comments Or Suggestions: Algorithm 1 looks very sparse. Maybe you can modify it somehow to make it more compact. Questions For Authors: I think the theoretical analysis is very interesting but the proposed method is confusing. I will raise my score if the authors can address my concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *Question 1: The proposed method is a piecewise linear shaping function, which contains 7 hyperparameters. According to the supplementary materials, the values of these hyperparameters are varying with different models on different benchmarks. For example, is 0.73 when using ResNet-50 but 1.76 when ViT-L-16. How do the authors choose the values of hyperparameters?* **Response 1:** See Experiments Section 5 Line 342-350 (2nd column). Yes, the optimal shaping function will depend on the specific architecture and its weights (hence the ID datasets) since each will have different statistical distributions – these can all be determined in training as only the ID dataset is needed. Note the hyper-parameters are fixed across all OOD datasets. *Question 2: I think the proposed method is not related with the theoretical analysis. I appologize if I miss something important. I expected that, the authors can "infer" some good shaping functions based on their theory. For example, maybe the theory can tell us what family of functions is good or help us to decide the values of hyperparemeters. But I don't see how the authors utilize their theory to design the function. It seems that the proposed function is mannually designed on different datasets.* **Response 2:** The reviewer’s expectation is correct our theory should predict a good shaping function, and it does. The piecewise linear family used in the experiments encompasses piecewise linear approximations of the resulting optimal shaping functions from the Gaussian, Laplacian and Inverse Gaussian OOD distributions, as discussed in Section 4 Lines 344-350. Notice that shaping functions for the Gaussian, Laplace, IG (Figure 2) all can be approximated well with a function in the family in Fig.4. So rather than tuning hyper-parameters for the given distribution and searching over the three distributions, we instead equivalently tune the hyper-parameters for the piecewise linear family – as this is more convenient practically. Note this not unlike previous literature (e.g., VRA) where hyper-parameter tuning of the shaping function is required. The proposed shaping function family is not manually designed on different datasets, see Response 1. *Question 3: How to understand OOD feature is ? Why does the OOD feature depend on feature ? What does this mean, the neural network or the shaping function?* **Response 3:** $p(\tilde z|z)$ is the conditional probability of the OOD feature ($\tilde z$) given the input standard NN feature (z) – see Lines 125-132. $p$ is a probability. Note that just as conventional deterministic shaping functions (e.g., f(z)) depend on the input feature z, so does our random feature; i.e., one needs to input the NN feature to determine the corresponding OOD feature. *Question 4: Eq (4), how to understand "where \tilde{Z} is analogous to T and Z is analogous to X"?* **Response 4:** In traditional IB (from Tishby et al 2000) – $T$ is a compressed representation of the data/feature $X$ that we want to solve for, and $Y$ is the random variable whose information is to be preserved from $X$. The traditional IB objective function is then $IB = I(X;T) – \beta I(T;Y)$. In our formulation, $\tilde Z$ is a compressed representation of the feature $Z$, and $Y$ is the information that we would like to preserve from $Z$. Since $\tilde Z$ is the compressed variable, it is analogous to $T$; and since $Z$ is the original feature/data, it is analogous to $X$. Our IB term is $IB = I(Z;\tilde Z) – \beta I(\tilde Z;Y)$. *Question 5: I'm not sure if the proposed theory aligns with the real practice. Under different OOD distribution assumptions, the theory can recover some properties of previous methods, but normally, these methods are all good on ImageNet benchmarks. So, what is the real OOD distribution of the benchmark? Which method is the optimal one?* **Response 5:** See Reviewer 4N3p Response 2. There is no “real” distribution of the benchmark as there are many datasets in the benchmark with different distributions. *Question 6: Algorithm 1 looks very sparse. Maybe you can modify it somehow to make it more compact.* **Response 6:** Thanks, will do. --- Rebuttal Comment 1.1: Comment: Thank you! Most of my concerns are solved, and thus, I raised my score to 3. But I'm still confused with the relationship between the theoretical analysis and the proposed method. I re-read the discussion in Section 4 Lines 344-350. It said: *"The above distributions all approximately fit in a piecewise linear function family, so in the experimental section we explore this as shaping functions."* In my understanding, the authors conduct experiments on the Gaussian, Laplacian and Inverse Gaussian OOD distributions, and they observe that all the distributions approximately fit a piecewise linear function family. And thus, they explore this as shaping functions. But I have two main concerns. First, a piecewise linear function can fit any functions, as long as there are enough number of “pieces”. [1] already uses a 100-piece function and optimizes it for OOD detection. What is the advantage of the proposed piecewise linear function over theirs? Second, more importantly, OOD distributions are typically unknown in practice, and they may not fit in any of "the Gaussian, Laplacian and Inverse Gaussian OOD distributions". How do the authors make sure the real distribution fit in the designed piecewise linear function. [1] Towards optimal feature-shaping methods for out-of-distribution detection. ICLR 2024. ---------------------- Update 04 Apr, Thank the authors for the detailed response! Most of my concerns have been addressed. --- Reply to Comment 1.1.1: Comment: *Thank you! Most of my concerns are solved, and thus, I raised my score to 3* **Response 1:** Thanks. *But I'm still confused with the …."* **Response 2:** Apologies – after reading the sentence in L344-345, we think that the way it was written may have led to confusion: it should be “The above distributions *result in optimal shaping functions* that all approximately fit …” Thank you for pointing this out, we will change as follows: “The above distributions result in optimal shaping functions that all approximately fit in a piecewise linear function family as shown in Figure 4, so in the experimental section, we explore this family as shaping functions.” *In my understanding, the authors conduct experiments on ...* **Response 3:** We would like to clarify that our method is not approximating OOD probability distributions using piecewise linear functions. Instead, our approach fits a very specific class of piecewise linear functions to the *feature shaping functions* that emerge from the optimization process (see Figure 2 that shows the shaping functions under the three OOD distribution assumptions). *First, a piecewise linear function can fit any functions, as long as there are enough number of “pieces”. [1] already uses a 100-piece function ... What is the advantage … ?* **Response 4:** Note we are not suggesting that simply using a piecewise linear function as a shaping function is new or in itself is our contribution. The primary contributions of our work are the development of a theory for constructing OOD features, showing how the theory with specific distribution assumptions can explain SOA methods, and an example use of the theory by suggesting a specific family of shaping functions that can lead to better performance. There is similarity of our work with respect to the last contribution to [1] (and also other works e.g., ReAct, VRA) in the use of piecewise linear functions for shaping functions. Note that our contribution is not to simply propose the use of piecewise linear functions – as that is too general a class, and as the reviewer rightly says, can approximate any function. So just proposing the use of piecewise linear functions with many degrees of freedom (e.g., 100) would not in itself lead to any useful algorithm. The advantage of ours as compared to [1] is discussed in L81-108. Our piecewise linear form is not the same as what [1] results in – they both come from entirely different loss functions: [1] optimizes over shaping functions: $\min_{ \{ f \text{ such that } |f(z)/z|<K \} } -\| \text{ mean of f given ID } – \text{ mean of f given OOD} \|$, To simplify the optimization from an infinite dimensional function optimization to a finite dimensional one, the problem is discretized so that theta(z) = f(z)/z is restricted to piecewise linear functions. Our loss function is $\min_{ p(\tilde z|z) } -KL(p(\tilde z|z)) + IB(p(\tilde z|z))$ where in a simplified case we assume $p(\tilde{z}|z) \sim N(\mu(z),\sigma(z))$ where $\mu(z)$ is the mean of the shaping function. We perform the infinite dimensional optimization over *general* functions $\mu(z), \sigma(z)$. *Note that we are not restricting our shaping functions to be piecewise linear in our optimization,* while [1] does – so the class of shaping functions we optimize over is actually more general than the 100 piece (or any other number of pieces) functions in [1] that are optimized over. Considering Gaussian, Laplace, IG OOD distributions, we show over all hyper-parameters (alpha, beta, distribution parameters) that the optimized function $\mu(z)$ can be expressed approximately by a very specific piecewise linear function family shown in Fig 4 (not *any* piecewise function). The advantages of our approach over [1] are two-fold. First, in terms of theory, our approach explains the implicit assumptions in SOA methods. Second, experimentally, our piecewise family out-performs [1]. *Second, more importantly, OOD distributions are typically unknown in practice, ….* **Response 5:** We want to clarify again – we are not fitting OOD distributions with piecewise functions, rather we are approximating our optimized feature shaping functions with a *specific* class of piecewise linear functions. We agree that real world distributions may not fit the three example distributions – see the discussion for Reviewer 4N3p Response 2. We are not advocating these distributional assumptions and hence not seeking to fit *all* real distributions. Rather, the purpose of our work is to explain SOA methods, which have shown similar traits to our feature shaping functions under these three OOD distributions. So while our piecewise shaping functions may not approximate shaping functions resulting from *all* real-world distributions, it is more general than the implicit assumptions made in existing SOA methods, without making too general distributional assumptions, which may fall outside the class of real OOD distributions.
Summary: The paper introduces a variational information-theoretic framework for OOD detection. It models OOD features as random variables by optimizing a loss function that balances KL divergence for feature separability and Information Bottleneck (IB) regularization for compactness. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have browsed the proof but have not examined it line by line. Experimental Designs Or Analyses: Yes. Supplementary Material: I have browsed the supplementary materials but not carefully checked. Relation To Broader Scientific Literature: This work builds upon and extends previous research by enhancing OOD detection, a crucial aspect of ensuring the reliability of deep learning models in real-world applications. Essential References Not Discussed: Related work appears to be discussed. Other Strengths And Weaknesses: Strengths: The method presented in the paper improves upon a certain method (e.g. ReAct) by providing a theoretical framework rather than relying on heuristic rules like clipping activations. Unlike ReAct, which applies a fixed threshold to activations, the proposed approach learns an optimal OOD feature distribution using variational optimization. This allows the model to adapt dynamically to different OOD distributions, whereas ReAct may require manual tuning of the clipping threshold. The paper’s method explains why different feature-shaping techniques (like ReAct, ASH, VRA, and FS-OPT) work under specific assumptions, offering a more general and theoretically grounded solution. Empirically, it also achieves good results on benchmark datasets. Weaknesses: The method assumes some prior knowledge of the OOD distribution (e.g., Gaussian, Laplacian). How does it perform when the OOD distribution is completely unknown or highly complex? The proposed optimization involves solving an infinite-dimensional problem using variational calculus. How does this impact computational efficiency compared to simpler heuristic methods like ReAct? Other Comments Or Suggestions: minor typos: Section 3.1: "we make some simplifications to gain insights to our theory and approach." → Should be: "we make some simplifications to gain insights into our theory and approach."? Questions For Authors: Please see weaknesses part above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *Question 1: The method assumes some prior knowledge of the OOD distribution (e.g., Gaussian, Laplacian). How does it perform when the OOD distribution is completely unknown or highly complex?* **Response 1:** Our experiments show the performance of our piecewise linear family that encompasses Gaussian, Laplacian, Inverse Gaussian perform well across datasets, even when the OOD distributions do not closely fit some of these datasets (see Reviewer 4N3p Response 2). Our piecewise linear form makes more general distribution assumptions than SOA techniques as several SOA methods are related to our framework under one of the three distributions above, which is the reason for its out-performance of existing SOA. *Question 2: The proposed optimization involves solving an infinite-dimensional problem using variational calculus. How does this impact computational efficiency compared to simpler heuristic methods like ReAct?* **Response 2:** The inference cost is about the same as existing methods like ReAct. Since this optimization is done offline in training, the cost is not important. The complexity of optimization for 1D is $\mathcal O (NMK)$ where N is the samples of $p(z)$, M is the samples of $p(\tilde z|z)$ and $K$ is the number of gradient descent iterations. In our experiments with $N=M=200$, the algorithm took between 5-10mins on a standard desktop. *Question 3: minor typos: Section 3.1: "we make some simplifications to gain insights to our theory and approach." → Should be: "we make some simplifications to gain insights into our theory and approach."?* **Response 3:** Thanks; will do.
Summary: The paper presents a novel theoretical framework for constructing out-of-distribution (OOD) detection features in neural networks using a variational information-theoretic approach. The key contribution is a novel loss functional that consists of a KL divergence term that maximizes the separation between in-distribution (ID) and OOD feature distributions, and an information bottleneck (IB) term that favors compressed features that retain relevant OOD information while discarding unnecessary details. The loss functional is optimized using variational methods, leading to the derivation of OOD features as random variables. The mean of these random features corresponds to deterministic shaping functions used in existing OOD detection methods. By carrying large experiments, the authors show that their proposed piece-wise linear shaping function outperforms existing methods on standard OOD benchmarks, including ImageNet-1k and CIFAR datasets. The authors show that their method generalizes well across different architectures (e.g., ResNet, MobileNet, Vision Transformers) and datasets. --- ### Update after rebuttal I thank the authors for their responses. I will maintain my initial score. --- Claims And Evidence: While the paper presents a strong theoretical framework with insightful connections to existing methods, certain claims, however, lack empirical validation, particularly those related to: 1. OOD distribution assumptions: The proposed theoretical framework assumes specific statistical distributions for in-distribution (ID) and OOD data (Gaussian, Laplacian, and Inverse Gaussian). Real-world OOD data may not necessarily follow these distributions, and no empirical evidence is provided to justify these assumptions. The method's generalization to arbitrary OOD distributions is unclear. 2. Superiority of random features over deterministic ones: The paper claims that random features (rather than deterministic shaping functions) are more optimal for OOD detection. However, this is only based on the assumed specific statistical distributions where the choice of the parameters seems handpicked and the dimensionality of the probability space reduced to 1D. 3. Necessity of the Information Bottleneck term: The IB term is used for regularization, ensuring that features retain only the necessary OOD information while being compressed. However, the necessity of the IB term is not experimentally tested. Put differently, we don't know how good the loss functional would be without the IB term. An ablation study could help determine the effectiveness of the loss. 4. Scalability to high-dimensional data: The optimization problem is inherently infinite-dimensional, requiring numerical approximation techniques. While they propose a computationally feasible method in 1D, the complexity of extending it to high-dimensional feature spaces is not addressed. Unfortunately, this is discussed extensively in the paper, even the complexity of the approach in a 1D setting is not discussed. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria mostly make sense for the problem of out-of-distribution (OOD) detection. The models and datasets they considered are widely used in the OOD literature as benchmarks. Theoretical Claims: I didn't check in depth the proofs of the theoretical claims. In fact, I mostly focused on understanding the proposed loss, the variational framework, and its effectiveness in detecting OOD samples. Experimental Designs Or Analyses: Yes, I examined the experimental design and analyses focusing on methodology soundness, validity of comparisons, and potential issues. Below is a structured review. Strengths: The authors compare their method to several state-of-the-art (SoA) OOD detection approaches, including: feature shaping methods (e.g., ReAct, FS-OPT, VRA-P, ASH), Softmax-based methods (e.g., MSP, ODIN), and energy-based methods (e.g., Energy, DICE). This makes their results contextualized within the broader OOD detection field. Their experiments were conducted on widely accepted datasets. They considered ImageNet-1k as the in-distribution (ID) dataset, and Species, iNaturalist, SUN, Places, etc. as the OOD datasets. They considered CIFAR-10/100 as ID datasets with common OOD datasets such as TinyImageNet, SVHN, Texture, Places365, LSUN-Cropped, LSUN-Resized, iSUN, CIFAR100/10. All these datasets seem appropriate for natural image distribution shifts, making them relevant for evaluating OOD detection. The paper evaluates many widely accepted neural networks models using metrics such False Positive Rate at 95% True Positive (FPR95) for OOD detection, and AUROC to measure overall separability between ID and OOD samples. These are standard and valid metrics for OOD detection performance evaluation. Supplementary Material: I mostly skimmed over the appendices due to limited time. Relation To Broader Scientific Literature: OOD detection is a widely studied problem in machine learning, and various techniques have been proposed, including confidence-based, energy-based, distance-based, and feature-shaping methods. The paper's contributions build upon these methods while introducing a variational formulation using information theory. With respect to Confidence-based OOD Detection methods, the paper moves away from softmax-based methods, which rely heavily on the output space rather than feature space. The authors argue that feature-based approaches offer better generalization than confidence-based methods. For Energy-based OOD Detection methods, the variational loss function in this paper can be seen as an extension of energy-based approaches, since KL divergence maximization between ID and OOD features implicitly encourages separation. However, the proposed method focuses on shaping feature distributions rather than directly modifying the energy function. With respect to Distance-based OOD Detection methods, the proposed method does not rely on distance metrics, but instead optimizes a variational loss functional to encourage separation between ID and OOD distributions. However, the KL divergence term serves a similar function to distance-based methods by ensuring OOD features are statistically different from ID features. For Information-Theoretic OOD Detection methods, the proposed loss explicitly incorporates IB regularization into the OOD feature optimization process. The authors justify several existing OOD shaping methods (e.g., ReAct, FS-OPT) as special cases under their framework. Essential References Not Discussed: The paper introduces a variational formulation for OOD feature extraction but fails to cite prior works that have applied variational or information-theoretic techniques to OOD detection. There are existing variational formulations for OOD detection that should be referenced for proper context. [1] proposes an information-theoretic approach for OOD detection by using likelihood ratios instead of raw likelihoods. [1] is similar to this paper in the sense that it formulates OOD detection from an information-theoretic perspective. The likelihood ratio idea relates to KL divergence-based feature separation in the current work. It would be useful to know how the KL divergence optimization proposed in this paper compares to likelihood-ratio-based OOD detection. [2] Examines the limitations of information-theoretic methods like normalizing flows for OOD detection. Since the proposed method is information-theoretic, the authors should discuss known pitfalls from this prior work. For instance, does the variational approach in this paper avoid the same failure modes? [3] Investigates how pre-trained models learn OOD features, showing that deeper layers contain more useful information. This work suggests that OOD detection improves when using deep feature representations. The current paper should cite this when justifying why it focuses on feature-based OOD detection. References: [1] Ren et al. (2019) – "Likelihood Ratios for Out-of-Distribution Detection" (NeurIPS 2019) [2] Kirichenko et al. (2020) – "Why Normalizing Flows Fail for Out-of-Distribution Detection" (NeurIPS 2020) [3] Fort et al. (2021) – "Exploring the Limits of Out-of-Distribution Detection" (NeurIPS 2021) Other Strengths And Weaknesses: Overall, the paper is well-written and insightful. Providing answers to the questions I asked above would bringing more clarity about the key contributions of the paper. Other Comments Or Suggestions: None. Questions For Authors: Key questions: 1. How does the variational optimization framework compare to likelihood ratio-based OOD detection (e.g., Ren et al., NeurIPS 2019)? This method maximizes KL divergence to separate ID and OOD feature distributions. Likelihood ratio-based OOD detection also exploits divergence between ID and OOD distributions but does so directly in a Bayesian framework. Understanding the difference in theoretical properties would clarify whether this approach is more generalizable or computationally efficient than likelihood-based OOD methods. 2. What specific assumptions about OOD distributions are necessary for the proposed feature shaping functions to be optimal? This method assumes Gaussian, Laplacian, or Inverse Gaussian distributions for OOD features. However, real-world OOD data may not follow these assumptions. It is unclear how the method would generalize to cases where OOD features do not fit these distributions. 3. Why didn't the authors include an ablation study comparing performance with and without the Information Bottleneck (IB) term? The IB term is a key part of the proposed framework, but there is no direct evidence that it improves results. Without an ablation study, it remains unclear whether IB contributes meaningfully or if similar results could be achieved without it. 4. How were the hyper-parameters (e.g., IB weight $\alpha$) selected, and were they tuned consistently across all datasets? 5. Since the use case that was presented in the paper focuses on 1D data, I am curious to know how well this method scales to high-dimensional data (e.g., Vision Transformers, large-scale multi-modal datasets)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Key Question 1 **Response 1**: Thanks for the ref, we’ll cite. We agree with the use of likelihood ratio (LR) as a score over the likelihood. However, our work focuses on feature shaping not scoring (L60-61, 2nd column). Our work separates the distributions of the resulting OOD feature shape distributions under ID/OOD through the KL divergence (& IB) to derive optimal shaping features, while [1] uses the LR of the original neural net feature as a score. So while the LR relates to the KL div. (as the expectation of the LR), [1] defines a score, whereas our approach determines feature shape. Key Question 2 **Response 2:** Our theoretical framework does not assume specific distributions: it is applicable to any ID/OOD distribution - no assumptions are needed for optimality. To concretely illustrate the theory, example distributions are explored. Section 4 explains our choice of distributions: Gaussian is common in probabilistic analysis; Laplace models heavy-tailed outliers typical of OOD; and Inverse Gaussian serves as a reasonable assumption when no prior knowledge is known - large when ID distribution is low and vice versa. These examples reveal that current SOA heuristics may implicitly assume properties of these distributions. Having said that, our examples have similarities to distributions consistent with OOD data in the real-world. VRA (Xu et al.) plots distributions of various OOD data in the ImageNet benchmark – OOD qualitatively have heavier tails and sharper peaks than ID data. Gaussian ID / Laplace OOD choice match these properties. In e.g. VIT-B-16, the distributions ID/OOD appear Gaussian – see https://drive.google.com/file/d/1XANDdWfXdy4MONY0MnQfaVOBoVNy7r0a/view?usp=sharing In practice, OOD data will change based on application so it isn’t desirable to match too closely to any real-world dataset otherwise it would not generalize. Key Question 3 **Response 3:** Ablations are not shown as, without IB, the loss is ill-posed (L114-116, 2nd column) - the optimization does not converge since distributions can be arbitrarily separated, increasing the KL term indefinitely. Appendix A explains this (L153-159, 2nd column) with a case that can be analyzed analytically - linear mean feature with Gaussian ID/OOD distributions. Without IB, the slope becomes unbounded. In the more general non-linear case, which cannot be solved analytically, the optimization also diverges without IB. This is consistent with literature – FS-Opt and VRA that separate distributions (though through mean separation) impose constraints by effectively restricting the slope of the feature shaping function. While restricting the slope does produce a well-posed problem, it may not be addressing the underlying cause of the ill-posed-ness. IB is more naturally suited to the OOD problem – retaining OOD information while compressing the feature. Key Question 4 **Response 4:** See Section 5 Lines 342-350 (2nd column). Yes, consistently chosen over all datasets. Key Question 5 **Response 5:** Our method has been demonstrated on high-dim data - the shaping functions are applied on each component of the penultimate layer feature (Sec. 5), as SOA does. We experimented with two vision transformers (Table 1: ViT-B-16 and ViT-L-16). *Superiority of random features over deterministic …* **Response 6:** We formulated general random features; a question is whether they result in a better loss. The answer is yes - maybe not for all cases, but all cases that we examined (all the distributions and hyperparameters in all the plots shown have non-zero standard deviation, we didn’t show them because of space and didn’t find it insightful). We will clarify the statement in the paper – that our intent is not to claim random features are always more optimal, just that there is reason to consider them. *Scalability to high-dimensional data …* **Response 7:** Our current methodology applies to high-dim data with independence assumptions (L185-190) as SOA. Generalization of the independence assumptions is challenging as numerical methods presented won't scale - grid representations of multi-d probabilities are infeasible. While we have preliminary ideas to address this, it is future work. The current paper's goal is to establish foundational theory and demonstrate its utility. Despite focusing on 1D shaping functions, consistent with SOA, our theory successfully explains these methods and predicts new shaping functions. This underscores the promise of our approach and lays groundwork for future exploration. See Reviewer Ghj6 Response 2 for complexity. *[2] limitations of information-theoretic methods like normalizing flows* **Response 8:**[2] shows why normalizing flows that produce exact likelihoods do not work well as a score for OOD detection. [2] relates to the scoring function, not feature shaping – which is the focus of our work. We’ll cite [2] in our discussion on scores. *[3]…should cite* **Response 9:** Thanks; will do. --- Rebuttal Comment 1.1: Comment: I thank the authors for their efforts. Their responses seem convincing enough. I will maintain my score.
null
null
null
null
null
null
Ergodic Generative Flows
Accept (poster)
Summary: The paper studies the problem of generative modelling and sampling (a bit confusingly referred to in the paper as IL and RL). The authors focus on the framework on GFlowNets in continuous spaces and learning directly from samples. The paper presents an alternate theoretical framework of Ergodic Generative Flows thats redefines the transition mechanism using a finite set of globally defined diffeomorphisms—smooth, bijective transformations with smooth inverses—that exhibit ergodic properties. Ergodicity ensures that iterative application of these transformations enables comprehensive exploration of the state space, allowing EGFs to approximate any point within it over time. This framing results in the flow matching objective requiring only a discrete sum over the finite set of diffeomorphisms instead of an integral. The paper establishes theoretically that EGFs can approximate any continuous distribution on compact manifolds such as tori and spheres under some assumptions. The paper also introduces KL-weakFM, which integrates a Kullback-Leibler (KL) divergence-based cross-entropy term with a relaxed version of the flow-matching condition, allowing learning directly from samples. The paper also includes some experimental results on low-dimensional problems, with EGFs showing strong performance. ### Update after rebuttal The authors provided some clarifications to my some of my questions. However, I do not agree with some of the claims made in the rebuttal (about calling what the method is doing imitation learning over generative modeling and TB not being a "pure GFN" and thus not a valid baseline). The authors also did include a baseline with a learned density which is discussed in the paper. Regardless of these, I think the paper makes interesting contributions so I keep my positive rating. Claims And Evidence: C1: Extending the theory of GFNs with quantitative results about sampling in the non-acyclic case. To the best of my understanding, the results in TB (Malkin et al. 2022) are applicable with a weaker assumption of termination w.p. 1 instead of acyclicity. Aside from Brunswic et al. 2024 which is discussed briefly in the paper, the result in Theorem 3.8 provides a novel quantitative result for GFNs in non-acyclic domains, providing a bound on the total variation. Although I note that the bound is not tested numerically. C2: Develop a theory of EGFs which induces a tractable flow matching loss In Section 3, the authors introduce EGFs which provide an alternate parameterization for policies in terms of a finite set of diffeomorphisms, and establish universality in Theorem 3.4, Theorem 3.5 and Theorem 3.6. The authors also discuss the dependency of the result on the $L^2$ mixing summability property in the Limitations. C3: Propose a coupled KL-weakFM loss to train EGFs for generative modeling In Section 3.3, the authors propose and discuss the KL-weakFM loss and validate it numerically in experiments in Section 4 in the generative modeling setting. This is the claim with the weakest evidence as I will elaborate further below. In particular, the experimental choices could be improved significantly. Methods And Evaluation Criteria: * A shortcoming already discussed in the paper is that the empirical evaluations are limited to low dimensional problems. Theoretical Claims: The key contributions of the paper are theoretical in nature. I have read through all the results and looked at the proofs for Theorem 3.4, 3.5, 3.6 and 3.8. I have tried my best to check the correctness but it Is possible that I might have misunderstood some aspects. Experimental Designs Or Analyses: * Additionally I find the baselines in the experiments lacking. For instance, in the RL case there is no comparison with other tractable GFN losses (for continuous spaces) like trajectory balance. * While the authors talk extensively about the shortcomings of learned diversity, the IL experiments have there is no comparison with a learned density baseline. Supplementary Material: I looked at the proofs, as well as some additional results in the Appendix. Relation To Broader Scientific Literature: The contributions of the paper add to the literature on GFlowNets, In particular the quantitative bound in Theorem 3.8 is useful addition to the broader literature on GFlowNets. Essential References Not Discussed: In the context of tractable continuous GFlowNets, Sendera et al. 2024 leverages the view of GFNs are diffusion models to enable scaling to high dimensional problems and would be useful to discuss here. Other Strengths And Weaknesses: Strengths: * I find the idea of replacing policies with a finite set of diffeomorphisms quite novel and interesting, as it opens up new avenues to explore. Weaknesses * (Minor) The paper focuses quite heavily on the intractability of flow matching in continuous spaces and equates it as a weakness of GFlowNets in general, even though there are tractable objectives like TB. I think the framing could be improved in that regards. Other Comments Or Suggestions: * (Minor) As the paper is mainly theoretical, the writing can be a bit dense at times. Some rewording, especially in Section 3 to make the results a bit easier to follow would be helpful. * (Minor) What the authors call imitation learning is really just a problem of generative modelling, and I find calling it IL a bit confusing. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear HCTd, We thank you for your detailed review. Before going into detail, allow us to emphasize that the present submission is an "Exercise in Style": how far can we go with "pure" GFN without a separate energy model? We understand it may be restrictive, but this is the game we decided to play. 1. You are right to point to Sendera et al., 2024, which deals with GFN generalization of diffusion models. To the best of my knowledge, Sendera et al. are restricted to reward and energy model objectives. As such, we should cite them together with Zhang et al., 2022. 2. Experimental Designs Or Analyses (1): We agree that DB or TB losses comparison is important, we made the initial choice to restrict ourselves to FM-losses for consistency. Also, the main purpose of the RL section is to check the basics of EGF in a standard GFN setting and confirm the 0-flow problem noted by Brunswic et al. DB or TB losses require changes in our implementation, but we will try to incorporate stability results by the end of the rebuttal discussion. 3. Experimental Designs Or Analyses (2): We fail to understand the comment as Figure 2 provides such a comparison. 4. Tractability of TB. Since TB objective does not enforce flow-matching property (there is no flow, only policies) it's not a "pure" GFN. Furthermore, to the best of our knowledge, TB requires a separate energy model since it has no flow. On the other hand, DB may be in the scope, but the expectancy of the DB loss is the same as the expectancy of the FM loss if the backward policy is chosen according to formulas (6-7), except that DB has a higher variance. 5. The other reviewers also made several suggestions and remarks regarding rewording. Please see our answers to the other reviews for the intended changes. 6. We wanted to emphasize the distinction between "target distribution with density" and "target distribution with samples." We felt that generative modeling in AI covers many practices that intersect both worlds (the work of Sendera et al., for instance), and we struggled to find the right terminology. Although imperfect choices, "Imitation Learning" carries the idea of "examples" while "Reinforcement Learning" is strongly associated with "reward" in the community. We cannot change this choice for the present submission, but we are open to suggestions for terminology that you may provide. --- Rebuttal Comment 1.1: Comment: Thanks for the response! > Experimental Designs Or Analyses (2) I think there is a typo in my review it should say learned "density" instead of learned diversity. My apologies! Perhaps I should clarify my question: The discussion in the paper seems to say instead of learning an unnormalized energy and then training a GFN to sample from it, EGFs can learn the sampler directly. So a simple baseline would be learning the _unnormalized_ energy using samples and then training a GFN on it, which as I understand is not the case in Figure 2. > Since TB objective does not enforce flow-matching property (there is no flow, only policies) it's not a "pure" GFN. I am not sure I follow the argument here. The policies learned in TB are defined using the flows, and the flows can be recovered (in principle) using the policies and $Z$. Proposition 1 in the TB paper shows that optimizing the TB objective achieves the flow matching condition. So I am not sure what you mean by "pure" GFN. > We felt that generative modeling in AI covers many practices that intersect both worlds (the work of Sendera et al., for instance) I think "generative models" is typically used to define approximating $p_\theta(x)$ (that can be sampled) given only access to samples $x_i\sim p(x)$. Sendera et al. study the sampling problem where you have access to an unnormalized energy and want to sample from the distribution. So I am not sure why this terminology is a problem. --- Reply to Comment 1.1.1: Comment: 1) As we mentioned in our first reply, our intention was purity, hence the will to train GFN with its flow $F(\cdot \rightarrow s_f)$ as reward model. This was also suggested in GFN foundations https://arxiv.org/pdf/2111.09266 Section 4.5 paragraph 3. However, you are right to point out that we should make the comparison. Our intuition was that beyond purity, the interest only reside two major points and two secondary ones: - (Major) the GFN induced reward model is *normalised* by construction if the WeakFM part of the loss is zero. This is a consequence of the equation (17) on page 6. In general, such guarantees are hard to ensure for general proxy reward models and results in more difficult crossentropy training of the reward model. Our main drawback is that we have to deal with a small background reward due to positive flow matching error bias induced by strong WeakFM loss, hence the background filter we use. - (Major) GFN internal reward model come with the guarantee of no mode collapse under zero WeakFM loss. Such a guarantee is more difficult to ensure for external reward models as one has to guarantee full support of the training distribution of the GFN which rely on the policy does not "focusing on modes too quickly" as in adaptive MCMC. See for instance https://link.springer.com/article/10.1007/s11222-008-9110-y . In our case, Ergodicity already ensures full support, so long trajectories should be sufficient to ensure that the estimator of the WeakFM loss is reliable. - (minor) bagging: having common features for reward on policies. But this may be done in a more straightforward manner by using a common backbone model. - (minor) Theorem 3.8 and corollaries are slightly more straightforward without an separate reward model. Therefore, our interest is mainly qualitative and theoretical. Nomalization is very easy to check but testing mode collapse would probably require to build an a fail case of EBM-GFN with mode collapse. Fairness of such an comparison requires careful experimental design. We cannot promise to do that within the limited time separating us from the end of the current discussion. 2) Regarding TB, it may be a personal opinion but TB is to GFN what DPO is to PPO: the flow is implicit. Worse, it is intractable in many cases, especially ours, as its computation would require to integrate over all possible trajectories passing through a given point. Yes TB guarantees come from flow reasoning, but this is insufficient to call it a GFN. In the same way, DDPM is a GFN with the density at time $t$ identified with the state flow, but we can agree that sincd only the gradient of the log density is known (ie its mean policy) diffusion is only a GFN from a Theoretical view point that serves only a pedagogical purpose. Finally, correct us if we're wrong but, if a GFN was parameterized with a forward policy and a star outflow as we do, the backward policy chosen as we do using a detailed balance condition, then the TB loss would not guarantee flow matching condition in the sense of equation (2) in our work, one would have $\overline F:=\sum_t F_{init}\pi_\rightarrow ^t \neq F_\rightarrow $ on $\mathcal S$ and possibly $\overline F \otimes \pi_\rightarrow (\cdot\rightarrow s_f)=R\neq F_\rightarrow \otimes \pi_\rightarrow (\cdot\rightarrow s_f)$. So sampling objective is fulfilled but not the fow matching condition. 3) We are quibbling on terminology: EMB-TB training is akin to LLM-SFT. It is definitely generative modeling but the method is two stages and Reward learning based. If we do not do this distinction, the whole "separate energy model" against "internal energy model" discussion becomes more confusing. We may defend our viewpoint choices but we have to agree that our terminology choice was not the best. 4) we have done the NASA flood and earthquake experiments. Here are the NLL scores Earthquake: -0.12 for EGF on par with Moser flow -0.09 Flood: 0.56 for EGF on par with Moser flow at 0.62
Summary: This paper introduces Ergodic Generative Flows (EGFs), a novel framework that extends Generative Flow Networks (GFNs) to address key challenges in training generative models for both reinforcement learning (RL) and imitation learning (IL). The authors identify four main challenges with existing GFNs: intractability of flow-matching loss, limited tests of non-acyclic training, the need for a separate reward model in imitation learning, and challenges in continuous settings. The key innovations of this work are: (1) leveraging ergodicity to build generative flows with finitely many globally defined transformations (diffeomorphisms), providing both universality guarantees and tractable flow-matching loss; (2) introducing a new KL-weakFM loss that couples cross-entropy with weak flow-matching control, enabling IL training without a separate reward model; and (3) developing a mathematical framework for EGFs that ensures expressivity even with simple transformations like translations on tori and rotations on spheres. The authors empirically validate their approach on toy 2D tasks and real-world datasets from NASA on the sphere using the KL-weakFM loss. Additionally, they conduct toy 2D reinforcement learning experiments with a target reward using the flow-matching (FM) loss. Their results demonstrate that EGFs can effectively address the identified challenges and outperform baseline methods, particularly in settings where computational and time budgets for generation are highly constrained. Claims And Evidence: Most claims in the paper appear to be well-supported by theoretical analysis and empirical evidence. The authors provide a comprehensive mathematical framework that logically connects EGFs to existing generative modeling approaches. However, some claims could benefit from additional clarification or support: - The claim in Theorem 3.4 regarding the expressivity of EGFs lacks specificity about what "expressive enough" means in terms of parameterization of $f^*_{\rightarrow}$. While the theorem states that EGFs can approximate any smooth density, the conditions under which this holds true could be more precisely defined. - The experimental results in Section 4.1 demonstrate the stability of EGFs with regularization, but the paper doesn't thoroughly explore why regularization has this stabilizing effect—whether it's simply controlling flow size or inducing more meaningful behavioral changes. Methods And Evaluation Criteria: The methods proposed in the paper are appropriate for addressing the challenges in training GFNs. The authors' formulation of EGFs using a finite set of diffeomorphisms with topological ergodicity provides a mathematically sound foundation that enables tractable flow matching while maintaining expressivity. The introduction of the KL-weakFM loss is particularly noteworthy as it eliminates the need for a separate reward model in imitation learning settings, addressing one of the key challenges identified. By coupling cross-entropy with weak flow-matching control, this approach offers a more direct path to training generative models from demonstration data. The evaluation criteria used in the experiments are sensible. For the toy 2D tasks, the authors assess the expressivity and stability of EGFs, which are important properties for generative models. For the NASA datasets on the sphere, they use the KL-weakFM loss and evaluate sample quality, demonstrating the real-world applicability of their approach. The additional reinforcement learning experiments with a target reward further validate the flexibility of the framework. However, the evaluation could be strengthened by: - Including direct comparisons to standard generative models like diffusion models in computationally constrained settings, which would better highlight the claimed advantages of EGFs. - Providing more comprehensive metrics such as training time, sampling efficiency, and robustness to hyperparameter choices. More extensive testing on higher-dimensional problems to demonstrate scalability beyond 2D and spherical domains. Theoretical Claims: The paper presents several theoretical results, including: - Theorem 3.4 on the expressivity of EGFs - Definition 3.3 of $L^2$-exponential ergodicity - Theorem 3.8 and Corollary 3.9 on sampling error bounds The proofs are in the appendix, which should be explicitly mentioned in the main text. While the core ideas behind these results seem sound, Definition 3.3 could be explained more clearly. As I understand it, the concept of $L^2$-exponential ergodicity ensures that the Markov kernel decorrelates functions over time, meaning the influence of initial conditions diminishes rapidly in an $L^2$-sense. The formulation of Theorem 3.8, which provides bounds on sampling error in terms of total variation distance, is valuable but would be strengthened if expressed for acyclic discrete GFlowNets, for example, using common notation in the field, which would make the connection to existing work more explicit. Experimental Designs Or Analyses: The experimental designs in the paper are generally sound. The authors test their approach on both synthetic and real-world data, which helps validate the method's applicability. In Section 4.1, the authors test EGFs on a checkerboard distribution, which is a good choice for evaluating expressivity. However, the analysis of why regularization stabilizes training could be more thorough. For the experiment in Section 4.2 on the volcano dataset, the authors appropriately use the KL-weakFM loss from Algorithm 1 and provide comparisons to baselines. The positive results here are compelling evidence for the effectiveness of EGFs. One potential issue is that Algorithm 1 uses importance weighting terms involving ratios of densities $f^*_{\rightarrow}$ and $f^*_{\leftarrow}$. If these densities are poorly estimated or close to zero in some regions, the weights could become unstable. It would be valuable to know if the authors encountered this issue in their experiments and, if so, how they addressed it. Supplementary Material: I have read the proofs of the main theorems, and quickly went through the extra experiments. Relation To Broader Scientific Literature: The paper builds upon and extends several strands of research in generative modeling: - It draws connections to normalizing flows, which model distributions through invertible transformations. - It relates to diffusion models, which generate samples by reversing a diffusion process. - It extends the framework of GFlowNets, which were initially designed for generative reinforcement learning. The paper contributes to unifying these disparate approaches under a common theoretical framework, which is valuable for advancing the field's understanding of generative modeling. However, the authors should acknowledge that Zhang et al. (2022) in "Unifying generative models with GFlowNets and beyond" already attempted to extend GFlowNets to imitation learning using the Trajectory Balance Consistency objective, achieving positive results. This prior work is relevant to understanding the context of the current paper's contributions. Essential References Not Discussed: Besides the Zhang et al. (2022) paper mentioned above, which should be cited given its relevance to extending GFlowNets to imitation learning, I don't have specific knowledge of other missing essential references. Other Strengths And Weaknesses: Strengths: - The paper presents a compelling unification of different generative modeling approaches, which is valuable for advancing theoretical understanding in the field. - The introduction tells a coherent and gradual story, building from normalizing flows to diffusion models to GFlowNets, which helps contextualize the work. - The proposed Algorithm 1 elegantly trains an ergodic generative flow using imitation learning with a balanced loss function. Weaknesses: - The presentation of some mathematical concepts is dense and could be more accessible. For example, Equation 7 appears to stem from the detailed balance condition and conservation of probability, but these connections aren't explicitly stated. Most theorems and definitions would benefit from a more gentle introduction for the typical ICML paper reader. - The background section on GFlowNets could be clearer, particularly in explaining how the stopping probability $\frac{dF_{\text{term}}}{d(F_{\text{term}} + F^*_\rightarrow)}(s_t)$ ensures that the Markov chain naturally stops in regions that match the terminal distribution (in the case of discrete acyclic GFlowNets). - The paper would benefit from a more thorough comparison to existing generative models in terms of sample quality, training time, and robustness. Other Comments Or Suggestions: - The background section should more clearly explain the stopping probability in GFlowNets and how it relates to the terminal distribution. Specifically, it should clarify that when $F_{\text{term}}$ has a lot of mass, the process is likely to stop, and when $F^*_{\rightarrow}$ dominates, it is likely to continue. - The derivation of Equation 7 should explicitly mention its connection to the detailed balance condition ($\text{Backward flow density} \times \text{Backward transition probability} = \text{Forward flow density} \times \text{Forward transition probability}$) and conservation of probability. - Definition 3.3 could benefit from a more intuitive explanation, perhaps using the interpretation that it ensures the Markov kernel decorrelates functions over time. - The authors should explicitly mention that proofs are in the appendix. Questions For Authors: - In Theorem 3.4, could you clarify what "expressive enough" means in terms of the parameterization of $f^*_{\rightarrow}$? What specific conditions must be met for EGFs to approximate arbitrary smooth densities? - In Algorithm 1, have you encountered issues with importance weighting terms becoming unstable when densities $f^*_{\rightarrow}$ and $f^*_{\leftarrow}$ are poorly estimated or close to zero? If so, how did you address this challenge? - Could you elaborate on why regularization stabilizes training in the experiments in Section 4.1? Is it merely controlling the flow size, or does it induce more meaningful changes in model behavior? - Have you compared EGFs and standard generative models like normalizing flows and diffusion models? How do EGFs compare in terms of sample quality, training time, or robustness? I'd be happy to raise my score if my questions are addressed and if the authors commit to improving the clarity and readability of their (great) paper, by providing a reasonable plan. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear mQ7G, We thank you for your particularly detailed review. To begin with, we will add clear references to proofs in the appendix. 3. Theoretical Claims: - Your interpretation of Definition 3.3 is correct. We may move the paragraph after Theorem 3.4 to after Definition 3.3 and expand it to better explain this definition by referring to the mixing properties of the Markov kernel. We agree that understanding Theorem 3.8 would benefit from a specialization on graphs. Your suggestion is in line with our answer to reviewer 6zNQ regarding neural network approximations. We shall insert a paragraph answering both points in the RL setting. 5. Scientific Literature: - We apologize for not citing Zhang et al. We will cite it properly. 6. Strengths And Weaknesses: - Equation 7, you are right; this equation should be introduced more gently. It may be derived from $\pi^*_\leftarrow (x\rightarrow A) = \frac{d (F_{\rightarrow}^*\otimes \pi^*_\rightarrow)(A\rightarrow \cdot)}{d F_{\leftarrow}^*}(x)$ taken from section 3.2 of Brunswic et al. https://arxiv.org/pdf/2312.15246. This equation indeed formalizes a detailed balance condition in the measurable setting. Also, Proposition 1 in the appendix clarifies the link between bi-measures and Markov kernels. We will add a sentence to account for this interpretation and add proof in the appendix. More generally, we will enhance the introduction of Theorems of definitions with comments on their intuition. 7. Questions: - The main point of Theorem 3.4 is that EGFs reduce the generative problem of finding a scalar field on the state space. Theorems 3.5 and 3.6 provide sufficient conditions for L2-exponential mixing, then an L2 error on $f_{\rightarrow}^*$ translates into an increase in $\delta$ in Corollary 3.9 and an increase of the TV, which is easy to relate to the L1 error of $f_{\rightarrow}^*$. We agree that the "expressive enough" formulation is vague and requires a comment. - Regarding Algorithm 1 and importance densities. The densities were used as is for the results presented in the submission. In later experiments, we added a small constant (1e-2) to stabilize training; however, no ablation studies were conducted to assess its relevance or effectiveness. Flow collapse was the most common problem encountered but seemed more related to too strong a regularization than to collapsing densities. A lower bound on $\int f_{init}$ proved sufficient to force exploration. - Your comment on the regularization echoes that of moVi. The regularization is inspired by Brunswic et al. regularization; their Theorem states that a decaying regularization may be used to enforce convergence toward an acyclic flow on graphs when using a stable FM loss. From an optimization viewpoint, the regularization pushes the derivative of the loss to be positive in the direction of any 0-flow, hence leading to a stable loss in the sense of definition 3 of Brunswic et al. In our setting, any 0-flow component is expected to vanish (but the theorem mentioned above does not apply to our setting due to their enumerable state space assumption). A side effect is reducing the sampling length. We will enhance the RL section with a discussion of this regularization. - We conducted comparisons to diffusion models (DDM) and normalizing flows (FFJORD), but we felt that the comparison to Moser flow already carried our point. We will add qualitative comparisons in the appendix on 2D toys on tiny 32x2 models. On the NASA datasets, we already include NLL comparisons to baselines for volcanoes; we will add the results on earthquakes and floods, also the results for Riemannian diffusion, as well as a comment on the relative size of each baseline. - The experiments presented in the paper were solved with manual hyperparameter tuning; the only caveats were: - too high regularization scale leads to flow collapse - too high WeakFM scale tends to make learning the optimum harder especially for small models. So far, we have only trained EGFs up to dimension 20 on distributions (toy and sparse reward Markov decision tasks). However, we prefer to limit the scope of the present work to proof of concept as hyperparameter tuning proved less straightforward in higher dimensions. - Regarding training time, we did not benchmark the training speed. We shall include a comparison table in the appendix. Our wall clock evaluation led us to conclude that our training for NASA datasets was faster than that of Moser Flow by several orders of magnitude. Diffusion models tend to train slightly faster than EGFs on 2D toys, while FFJORD is significantly slower. However, hyperparameters have been shown to impact convergence speed significantly. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply and your eagerness to improve the clarity of the paper. Upon reading your answer, and other reviews, I am raising my score. --- Reply to Comment 1.1.1: Comment: We thank you.
Summary: This work introduces Ergodic Generative Flows (EGFs) to tackle several issues not satisfactorily resolved in generative flow networks by using ergodicity to create simple generative flows with globally defined transformations and tractable flow-matching loss. Furthermore, a new KL-weakFM loss is proposed for IL training without a separate reward model. The effectiveness of IL-EGFs is evaluated on toy 2D tasks and real-world NASA datasets, while toy 2D reinforcement learning experiments are conducted using the FM loss. Claims And Evidence: This paper is well-written and presents rigorous theoretical results for a class of generative models. The authors provide clear and convincing evidence to support their findings. Methods And Evaluation Criteria: This paper primarily focuses on theory and methodology, offering substantial theoretical insights and proposing new methods. However, the experimental evaluation of the proposed method is somewhat limited, with only a few experiments conducted to assess its effectiveness. Expanding the experimental section with more extensive tests could further validate the practical applicability and performance of the method. Theoretical Claims: I have gone over the theoretical results and proofs presented in the paper, and they appear to be correct. The logical flow and mathematical rigor of the arguments are sound, and the conclusions drawn from the proofs are well-supported. Experimental Designs Or Analyses: The experiments conducted to assess the effectiveness of the proposed method are limited to two-dimensional datasets. While these experiments provide initial insights into the method's performance, evaluating it on higher-dimensional datasets would offer a better understanding of the method. Supplementary Material: I have reviewed the entire supplementary material. Relation To Broader Scientific Literature: Generative Flow Networks (GFNs) were initially developed for sampling from unnormalized distribution densities on directed non-acyclic graphs. While recent advancements have expanded their theoretical framework, challenges persist in training GFNs in continuous settings and for imitation learning (IL), such as intractable flow-matching loss and the need for a separate reward model. The key contributions of this paper is that it addresses these issues and proposed a new approach. Essential References Not Discussed: It appears that the essential related works are adequately cited in this paper. Other Strengths And Weaknesses: No additional comments here. Other Comments Or Suggestions: No other comments. Questions For Authors: I have the following questions for the authors: 1. In the standard generative learning scenario, where only a random sample of size n is available, do Theorem 3.8 and Corollary 3.9 provide any insights into how the error bounds are influenced by the sample size? 2. How do the error bounds in Theorem 3.8 and Corollary 3.9 relate to the dimensionality of the data? Do these results suffer from the curse of dimensionality? 3. Which functions are approximated by neural networks in your framework, and how are the approximation errors addressed in Theorem 3.8? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear 6zNQ, We thank you for your detailed review. We acknowledge the need for higher dimensional experiments, it is part of an ongoing project to scaling up EGFs as well as training conditioned EGFs. Regarding your questions: 1. Unfortunately, they do not directly. A straightforward strategy would be to use Wasserstein bounds together with dequantization. - By Theorem 3.8 and its corollaries, the KL-WeakFM loss controls the TV distance between the EGF sampling distribution and the target dequantized empirical distribution. - On compact domains W < diameter * TV, the loss then controls the Wasserstein distance between EGF sampling distribution and dequantized empirical target distribution. - Finally, use triangular inequality in W distance for the path "dequantized empirical target distribution -> empirical target distribution -> true target distribution." Bounds such as https://arxiv.org/pdf/1707.00087 to estimate the Wasserstein distance of the empirical distance to the target distribution are leveraged at this step. Such a computation is straightforward and may be added in appendix. The unbounded case requires extra care and assumption for the second step. 2. Regarding the curse of dimensionality, - to begin with, we refer to the answer given to reviewer moVi about the control of the sampling time (short answer yes in a tame way). - more to the point, the curse of dimensionality is likely to arise in the estimation of the weakFM loss since we need to enforce a flow-matching condition. We expect the spectral gap to reduce the asymptotic theoretical complexity of the Monte Carlo estimator compared to naive translation moves (non-ergodic), but more theoretical work is needed. A more sophisticated replay buffer is also probably required. We note this question for future theoretical investigations. 3. On neural networks approximations - Neural networks approximate: - the star outflow $f_\rightarrow^*$, - the policy softmax part $\alpha_\rightarrow^i$. - the translation part of the affine transforms on tori (but with no inputs since they do not depend on the state) - The answer to your question then depends on whether we are in the RL or IL setting. In the IL setting, Corollary 3.9, together with equation 16, ensures that the KL-WeakFM loss controls the TV error. In the RL setting, we should add a paragraph explaining the following: - an L1 flow matching error controls the TV bound, itself bounded the L2 flow matching error controlled by the loss, which in turn depends (in a tractable way) on the trainable models $f_\rightarrow^*$, $\alpha_\rightarrow^i$ and the fixed models $f_{init},f_{term}$. More abstractly, the FM loss we use is an estimator of a twisted L2 distance of the density of the approximated flow bimeasure $F_\rightarrow \otimes \pi_{\rightarrow}$ to the affine subspace of target flow-matching bimeasures, where $F_\rightarrow = (f_\rightarrow^* + f_{term})\lambda + f_{init}\delta_{s_0}$ and $\pi_{\rightarrow}(x) = \frac{ f_\rightarrow^*(x)}{ f_{term}(x) + f_\rightarrow^*(x)} \pi_{\rightarrow}^*(x) + \frac{ f_{term}(x)}{ f_{term}(x) + f_\rightarrow^*(x)}\delta_{s_f}$. This L2 distance controls the L1 distance if we assume the state space to have a finite volume. The L1 distance is $\delta F_{init} (\mathcal S)$. As a result, the stable FM-loss controls the right-hand side of Theorem 3.8. Thus, we will add our answer to question 1 in the appendix and a paragraph in the main text to answer question 3.
Summary: This paper proposes a new family of generative flows called Ergodic Generative Flows (EGFs) which are capable for both RL and IL tasks in continuous settings. The generative flows are built upon finitely many globally defined transformations, with probable universality over continuous spaces like tori and spheres, enabling explicitly expressing and sampling from the undertraining distribution simultaneously. Further, the authors derived a novel loss, coined KL-weakFM loss for IL training where the target distributions is intractable. The proposed methods are evaluated through toy 2D experiments for both RL and IL tasks, and NASA volcano dataset for IL task. Claims And Evidence: Yes Methods And Evaluation Criteria: The training losses for RL tasks seem problematic. The "stable FM loss" referred to in Section 2 of this paper is neither the commonly used one proposed by Bengio et al. (2021) nor the one proposed by Brunswic et al. (2024). Also, the regularization term in Section 4.1 needs further explanation. Theoretical Claims: I checked the theoretical claims except for Section 3.1 because I'm not familiar with topology and differential geometry. Experimental Designs Or Analyses: Section 4.1 (RL Experiments) is confusing. Despite the issues mentioned above, the purpose of this experiment seems unrelated to the primal claims of this paper, and the presentation of the results (Figure 1(b)) is unclear. The experiments and analyses in Section 4.2 are reasonable, but I suggest the authors to present the complete results on the complete earth and climate datasets, including volcano, earthquake, flood and fire. Supplementary Material: I reviewed Appendix B, C and D. Only some small issues. Line 706: the second $\hat{f}_{term}$ should be $\hat{f}_{sat}$ Line 716: The x-labels and y-labels of Figure 4. Relation To Broader Scientific Literature: This paper extends the theoretical framework of GFlowNet for continuous settings and IL taks, providing a novel approach for generative models in continuous space. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths This paper is well-motivated and easy to follow. The proposed method seems to have a strong theoretical support and is insightful. The novel framework of GFlowNet for continuous settings and IL tasks has great potential. Weaknesses The RL part lowers the overall level of this paper. The experiments are limited. Other Comments Or Suggestions: Some notations are inconsistent or used without definition. For example $f^*_\rightarrow$ in (3), $\delta$ in (5), source and target distribution in Algorithm 1. Questions For Authors: Can the authors explain how the sampling time $\tau$ is related to the trainable parameters, the dimension of the state space and the performance of the flow network? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear moVi, We thank you for your detailed review and valuable feedback. Please find our responses to your comments below: 1. Methods and Evaluation Criteria Stable Loss: We appreciate your observation regarding the difference between our stable loss and the one proposed by Brunswic et al. (2024) in equation (19). We would like to clarify that Brunswic et al. introduce a family of stable losses in Section 4.2, denoted by $\Delta_{f,g,\nu}(\alpha, \beta)$. Our stable loss, as given in Eq. (3), is an instance of this family. Since our focus in this work is on the foundational aspects of Ergodic Generative Flows (EGF), we opted for the simplest stable loss, as it seemed more relevant for our goals. Furthermore, we experimented with Brunswic et al.'s loss on non-acyclic GFlowNets and observed little improvement, with additional difficulties in hyperparameter tuning. Regularization: Brunswic et al. introduced a cycle-killing regularization schedule in Theorem 1 to ensure convergence of non-acyclic flows toward an acyclic flow. Although they do not explicitly suggest using this regularization to stabilize losses in non-acyclic settings, it seemed a natural step for us. We note that a similar approach was also taken by Morosov et al. in their work "Revisiting Non-Acyclic GFlowNets in Discrete Environments." While we believe the first point does not require further discussion in the main paper, we will include a clarification about our architectural and hyperparameter choices in the appendix. We will also ensure the regularization approach is more clearly explained in the main text. Also there was a typo on the Bengio loss (the square was missing). 2. Experimental Designs or Analyses Purpose of RL Experiment: The RL experiment serves a dual purpose: A) testing the applicability of EGF in the original context of GFlowNets, and B) addressing limitation 4. While limitation 4 is secondary to the primary discussion on non-acyclic GFlowNets, we think it is an interesting point that warrants clarification. Additional Datasets: We agree that the inclusion of flood and earthquake results would improve the completeness of our experiments. We are currently running updated experiments to incorporate these benchmarks. However, we were unable to locate the original fire dataset due to changes in the NASA repository. 3. Supplementary Material We have corrected the issues in the supplementary material as per your suggestions. 4. Other Comments or Suggestions We will make the necessary corrections and reformulations to the notations and definitions as suggested. 5. Sampling Time and Performance - Spectral Gap and Sampling Time: The estimation of sampling time can be derived from the spectral gap and the $L_2$ norm of the target distribution. We intend to discuss these relations in a broader follow-up work focused on spectral gap control for EGF. Although this theoretical framework is still in development, the following informal results provide some insight: - A spectral gap $\eta>0$ guarantees exponential decay in $L^2$ deviation from the mean: $||(f-\int f)\cdot(\pi^*_\rightarrow)^t|| = O((1-\eta)^t).$ - To transition from an initial distribution $f_{init}$ to a terminal distribution $f_{term}$ with error $\epsilon$, we require: $||(f_{init}-f_{term})(\pi^*_\rightarrow)^t||\leq \epsilon$. - Thus, the required sampling time $\tau$ is expected to scale as: $\tau = O\left(\frac{\log(\epsilon)-\log(||f_{init}-f_{term}|| )}{\log(1-\eta)}\right)$. - One could then leverage general lower bounds on spectral gaps such as those of https://arxiv.org/abs/2306.12358 to guarantee a spectral gap scaling as $\eta \sim 1/dim$. - Furthermore, the difficulty of the objective is controlled by $||f_{init} - f_{term}||$, the co-dimension $q$ of the manifold supporting the distribution would then leads (with a bit of gaussian dequantization) to $\log||f_{init} - f_{term}|| \sim \log Vol($Ball of radius $r$ in dim $q) $ so that one would end up with $\tau = O\left(-\log(\epsilon)dim+q\log(q)dim\right)$. - However, this bound is probably very loose: in the small volume limit, the "effective" spectral gap is bounded away from 0 (we may discuss this in more details if you wish) so the term $q\log(q)dim$ becomes $q\log(q)$. - We stress the need for a proper analysis, hence a later submission. - Trainable Parameters and Sampling Time: the simplest answer is via Theorem 1 as the sampling time is directly controlled by $\int f^*_{\rightarrow}$ and the WeakFM loss via equation 17. The master Theorem 3.4 provides a control: The proof controls the $L^2$ norm of the target $\||f^*_{\rightarrow}-\int f^*_{\rightarrow} \|| = O(\||f_{init}-f_{term}\||\sum_t(1-\eta)^t) = O(\||f_{init}-f_{term}\||\frac{1}{1-\eta})$. One could link some measure of expressivity of neural network to such a bound, we did not do it at this stage as we would like to reserve it for a future work (as mentioned above).
null
null
null
null
null
null
Does Generation Require Memorization? Creative Diffusion Models using Ambient Diffusion
Accept (poster)
Summary: This paper explores the use of noisy images to mitigate memorization issues in diffusion models, examining the tradeoff between fidelity and memorization. Under the assumption of normality in the data distribution, the authors analyze the problem of information leakage. They also investigate the memorization issue within a framework similar to that proposed by Feldman (2020). Empirical studies are conducted to better understand the fidelity–memorization tradeoff by introducing noise at different scales to the samples. Claims And Evidence: All the theoretical results are accompanied by proofs. And the empirical studies' conclusions are aligned with the provided empirical results. Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked those in section 4.1, which seem right. I am not familiar with the framework proposed by Feldman in section 4.2.1 but the results seem to be aligned with the intuition. Experimental Designs Or Analyses: The results in Table 1 are not well explained. For example, what is S. And for S>0.9, are they percentages of samples above the threshold? How are they computed? In addition, you mentioned that you played with t_n to obtain comparable FIDs of DDPM and yours, but the true values of t_n are not provided. In addition, could you also include more details on how the models are trained? E.g., have you used any clean images, etc. Currently, the relationship between FID and memorization issues and the selections of t_n remains unclear. Also, the Pareto frontier is not defined in the paper. Overall, it seems to me that the empirical studies are not comprehensive enough to give readers a good understanding of what the authors tried to present. Maybe more raw data and discussions on them should be made. Supplementary Material: N/A Relation To Broader Scientific Literature: While the model is trying to address the memorization problem of the diffusion models, the work can be framed as using corrupted images to train a diffusion model to estimate the ground truth data distributions. This problem can be seen as a special case of the more general deconvolution theory in statistics. Although not discussed in this paper, the memorization problem can also be tackled through the differential privacy-based method. Essential References Not Discussed: It would be beneficial to include some work discussing the use of differential privacy-based methods to train generative models. Other Strengths And Weaknesses: Strengths: Solving the memorization problems of the generative models is crucial. Solving this problem can potentially address copyright infringement problem, data providers' privacy concerns, etc. Weakness: 1. I do not think the paper is well-organized and gets its theoretical results hinged with its empirical studies. The authors should have provided more intuitive descriptions of their theoretical work and how they together justify training diffusion models using generative models to tackle the memorization problems. In the current version, this kind of discussion is lacking, and it is difficult for the readers to see how the theoretical results are connected and how the empirical results support the theoretical claims. 2. Regarding the theoretical work, Lem 4.2 assumes that x0 ~ N(mu, sigma), which might be too strong. While I understand that the proof might be very challenging for a general data distribution, the authors should discuss why the current selection is sufficient to give a good understanding of the general case. 3. For the results in section 4.2, the assumed settings are not justified. In particular, I do not see the benefits of a mix of N distributions in understanding the memorization problem of the diffusion models. Besides, the derived results are not tailored for the diffusion model (i.e. the losses); as a result, it is somewhat questionable to what extent the theoretical results could characterize the diffusion models' memorization problem. Other Comments Or Suggestions: 1. Eq (5) has the leading term -x. It seems like some discounting factor is missing. 2. The appearance of the formula at the bottom right of page 3 should be optimized. 3. Alg 1 should be included in the main text. (or a concise version should be included at least.) 4. Section 4.2 could be reorganized. Currently, readers might find it challenging to understand the main purpose of the section, even if a section overview has been provided. 5. Table 1 and Table 2 require more clarification. Questions For Authors: Please the Other Strengths And Weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: From the questions raised, it appears that there are some major misunderstandings. In what follows, we do our best to clarify them, and we urge the reviewer to reread some part of our work and reassess their evaluation. In our end, we will do our best to improve the presentation based on the Reviewer's feedback. **"Have you used any clean images?"**: Yes, we use clean data but only for times $t\leq t_n$ of the diffusion, where $t_n$ is a hyperparameter to be controlled. For times $t\geq t_n$, we only use corrupted data and the Ambient Score Matching objective. The corruption to time $t_n$ happens once, before training, to ensure that there is no information leakage about the clean images at these times. **"what is S?"**: The meaning of S is explained on page 7, Lines 377-382 in the first column. To reiterate, for each generated image, we measure its similarity, S, (inner product), with the nearest neighbor in the dataset. Tables 1 (and Table 2) report percentages of the dataset that have similarity S greater than certain thresholds. In our rebuttal, we further report mean and median values -- see response to Reviewer Einp. **"the relationship between FID and memorization issues and the selections of $t_n$ remains unclear"**. Figure 1 shows the achieved FID and memorization for different values of $t_n$. There is an extended discussion on the choice of $t_n$ on page 7 of our submission. **"but the true values of t_n are not provided"**: this is a critical misunderstanding. We have access to clean data, and we intentionally corrupt it for some diffusion times to avoid memorization. Please refer to Algorithm 1 and Section 3 (Method, page 4) of our submission, where we verbally explain this algorithm. See also Choices for $t_n$ on page 7. **About differential privacy-based methods**: This is a great remark. We will add this discussion. Briefly, the notion of privacy is stronger than the notion of memorization. Privacy guarantees would mean that any adversary that has access to the model could not extract samples used for the training of that model. Our method does not guarantee that: we show that sampling with our model leads to outputs that resemble less to the training points. That does not exclude the possibility that an adversary with access to our model could recover the training points. Since we have relaxed expectations/guarantees, we also enjoy high performance. **On the Gaussian assumption in Lemma 4.2**: The goal of Lemma 4.2 is to motivate why the ambient optimal score solution ``leaks” less information at time $t_n$ compared to running vanilla DDPM from time $T$ until time $t_n$. While we believe that it holds for more general distributions, generalizing the result comes with some technical challenges and does not add much additional value to our goal of motivating why the ambient optimal score solution leaks less information. **Section 4.2**: Building a theoretical framework for understanding memorization in machine learning is quite challenging, and there is limited amount of work trying to address it. The seminal work of Feldman provides a natural instance (with some arguable caveats) of the learning problem where optimal classification requires memorization. Our work aims to take a step in understanding memorization in diffusion models, not by solely using Feldman’s framework but by taking a two-step approach: First, we adapt the setting of Feldman (which is a well-accepted) to more general loss functions (capturing e.g., standard score matching losses). The message of this general framework is roughly speaking the following: if the distribution of the frequencies of the subpopulations is heavy-tailed, then to achieve optimal generalization loss, the algorithm has to make the loss at all the points seen exactly once zero; otherwise, it pays a penalty \tau_1 for any not fitted point. Second, to make use of this general framework, we rely on how diffusion models are trained. We view the above general framework in many noise scales. As we explain e.g., in Lemma 4.5, even if the original dataset has the heavy-tailed structure of Feldman’s model, as the noise scale increases (which is the core idea of diffusion-based generation), the tails will become lighter since the populations start to merge. This is where our framework is specialized in diffusion models and not in Theorem 4.3. Combining the above two observations, the theoretical part of Section 4.2 gives evidence that avoiding memorization in the high-noise regime is feasible. Our main contribution in this part is the general observation that the tail of the distribution of the frequencies depends on the noise scale, and this is a property that depends crucially on how diffusion models are trained. This illustrates our conceptual contribution on why, for the high-noise regime, memorization could be avoided. We hope this rebuttal clarifies things and we remain at the Reviewer's availability if there are further concerns. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed clarifications. Most of my concerns are addressed. I have a few comments based on your responses: 1. Regarding "what is S?". I believe this confusion was largely caused by some abuse of the notations. For example, S denotes a finite amount of examples in 2.2. While in Table 1, it becomes similarity and you also used S. Then in Table 2, it becomes Sim. On page 7, Lines 377-382, I was aware of the definition, but the paper never introduced a dedicated notation for it. Regardless, this is a writing issue and I wish authors could make them consistent. 2. Regarding differential privacy, memorization and the discussion related. Thanks for the clarifications and it now makes sense. So the proposed method is only designed to alletivate the memorization problem instead of a protection of sensitive data (as the model still needs all clean versions of the dataset.) It will be helpful if you can put a note regarding this point as we have seen a sequence of work by Daras et al selling a similar idea from that perspective. 3. Regarding Lemma 4.2, as other reviewers also commented, while I could guess the motivation, the current way you present it is kind of misleading as the tone selling it appears like you assume a similar setting to Lemma 4.1. You may either consider extending it to the Lemma 4.1's setting or add addtional comments to better explain your motivation. 4. For the comments on Section 4.2, I like explanations, which make everything much clearer. It will be great if you can integrate them in the revision. I increased the score, although I still believe significant revisions to the presentation are needed. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score and for your feedback! Regarding S, we acknowledge that the notation used could have been better. We will put a lot of effort into improving the presentation for the camera-ready to avoid confusing the reader. We will incorporate the discussion on memorization vs differential privacy, add additional comments for Lemma 4.2, and incorporate the discussion from the Rebuttal into Section 4.2. Thank you for your comments and we will make sure to significantly improve the clarify of the presentation for the camera-ready.
Summary: This paper addresses the issue of memorization in diffusion models, proposing a framework to reduce memorization while maintaining high-quality image generation. The authors introduce a simple method that utilizes noisy images to learn the initial portion of the generation trajectory, followed by high-quality images for the final portion. This approach effectively prevents memorization when forming the high-level structure of the generated image, while still allowing for high-quality, detailed features from the training images. - The authors support their method with theoretical results that quantify the information leakage from the training set, demonstrating it to be smaller than standard approaches. They also extend existing results on memorization and generalization error from the classification literature to the context of diffusion models. - They also empirically validate their approach, showing that it achieves a better tradeoff between memorization and quality compared to standard DDPM, and does a better job at preserving the quality of generated images compared to existing approaches for preventing memorization in diffusion models. Claims And Evidence: - The claims made in the submission are generally supported by clear evidence. The authors provide theoretical results, including Lemmas 4.1 and 4.2, which characterize the sampling distribution and compare mutual information between their method and standard diffusion. I have read the proofs of Lemmas 4.1, 4.2, and Theorem 4.3, and they seemed sound. - The experimental results demonstrate the effectiveness of their approach in reducing memorization while maintaining image quality, as measured by FID scores and similarity metrics. - While their theory does not completely speak to the success/guarantees of their particular method (for instance the information leakage guarantees hold only for ambient diffusion, and not their two-stage concatenated method), the authors are careful to point out these limitations. Methods And Evaluation Criteria: The methods and evaluation criteria appear appropriate and well-justified for the problem at hand. The authors use established metrics such as FID for image quality and nearest neighbor similarity scores for measuring memorization. The authors explore the effects of different t_n values and also compare against existing baselines for memorization prevention, in both cases looking at datasets of varying sizes, thus providing a thorough evaluation of the performance of their method. Theoretical Claims: - The information leakage result (Lemmas 4.1/4.2) is a nice accompaniment for the proposed method, though I found it a bit unclear at first that the statement is about only a single point, and would have preferred that to be made more clear before the lemma was presented. - The results in Section 4.2 make sense, but upon inspection of the proofs, it doesn't seem like they use anything specific to diffusion models. It would be useful for the authors to comment on if/how the proofs are substantially different from Feldman's results. - In a similar vein, I found the statement of Theorem 4.3/B.2 to be a little hard to parse at first, especially when trying to compare to the analogous statement from Feldman, which compares the error to the optimal generalization error. Based on my understanding, the result implies this, but I think it might make it clear to add some discussion and/or changing how the theorem is presented. Experimental Designs Or Analyses: The experimental designs appear sound, thorough, and well-explained. The authors thoroughly investigate the memorization-quality tradeoff of their method along a number of axes, i.e. tuning the t_n parameter, varying dataset size, comparison to existing anti-memorization baselines, and extending their method to text-conditional generation. Supplementary Material: I took a brief look at the appendix while parsing the theoretical results section, and found it to have clarifying discussion and did not uncover any soundness issues. Relation To Broader Scientific Literature: This work builds upon and extends existing literature on memorization in machine learning, particularly Feldman's work on heavy-tailed distributions. It also contributes to the growing body of research on diffusion models and their properties. It is notable that their algorithms produce state-of-art results in terms of the trade-off between memorization and quality for both image generation and text-conditional generation compared to prior work. Essential References Not Discussed: I am not aware of any essential references that are missing. Other Strengths And Weaknesses: - For the most part, the paper is clearly written and easy to follow. - I appreciate that their approach is simple, yet achieves convincing improvements wrt the tradeoff between memorization and quality. - I found the appendix discussion of the theoretical results far easier to follow than the more informal discussion presented in the main portion. I understand that this may be due to a space constraint, but it would be worth taking a second look at the main portion to see how the discussion can be clarified. Other Comments Or Suggestions: - Several instances of straight quotes (") instead of proper LaTeX quotes (`` and '') appear throughout the paper, resulting in backwards-facing opening quotes. - There are some grammatical issues that should be addressed, such as the phrase "How much heavy-tailed is the distribution" on lines 325-326. - Lemma D.1 seems to have a typo as I assume the statement should be about I(A;S) rather than I(D;S). Questions For Authors: - As noted above, I would appreciate hearing from the authors in what ways the analysis presented in Section 4.2 is substantially different from [Feldman, 2020] Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for careful reading, for their feedback and suggestions! **Lemma 4.2 being about a single point**: We preferred to present the statement for 1 point since it already presents the advantage of ambient diffusion compared to vanilla DDPM: Given a generation of m points from each of the two models, the mutual information is larger. We agree with the comment that a larger training set is desirable, and this is why we present such a result in the Appendix (we refer to it right after Lemma 4.2). **On the difference from Feldman’s work**: We believe that our main contribution in this part is the general observation that the tail of the distribution of the frequencies depends on the noise scale. This observation is specific to the way that diffusion models are trained and does not appear in Feldman’s work because it focuses on the multiclass classification. To make this observation more rigorous, we adapted Feldman’s framework with a more general loss function (which should be independent of the mixing coefficients): in fact, this adaptation is not technically challenging and we do not claim novel technical challenges; it is instead a generalization of the previous proof. However, the novelty comes from a conceptual perspective. As the noise level increases, the penalty \tau_1 in the Informal Theorem 1 is decreased because the tails become lighter. To summarize, this part of the paper illustrates our conceptual contribution on why for the high-noise regime, memorization could be avoided. **On the readability of appendix and main body**: We thank the reviewer for that comment. Indeed, the lack of space made the presentation a bit dense. We will use the extra page in the Camera Ready to improve the presentation of the work. We hope that our response addresses any remaining concerns and that the Reviewer will support the acceptance of our submission. We remain at the Reviewer's availability in case there are additional concerns to be addressed.
Summary: The paper investigates the links between good performance of generative models (i.e., low FID) and memorization of the training set. in particular, the paper investigates the Pareto front. of performance vs memorization. Based on a recent "ambient score matching loss" [1], the paper introduces a new diffusion technique with good performance and less memorization. [1] DARAS, Giannis, DIMAKIS, Alexandros G., et DASKALAKIS, Constantinos. Consistent diffusion meets tweedie: Training exact ambient diffusion models with noisy data. arXiv preprint ICML 2024 Claims And Evidence: Yes Methods And Evaluation Criteria: In figure 2, how exactly do you define the "number of duplicates"? In [2] they use the cosine similarities between the synthetic image and the closest one from the training set. Maybe the mean/median of this cosine similarity could be a better than "number of duplicates", as done in hte experiment section as I understand. - Would it be possible to have the same kind of plot with the Dinov2 FID between the generated set and the test set? [2] Kadkhodaie, Zahra, et al. "Generalization in diffusion models arises from geometry-adaptive harmonic representations." ICLR (2024) - **I did not understand the proposed algorithm**. How does the sampling process work exactly? You have 2 different time scales depending on how advanced the denoising process is. Theoretical Claims: - IMO the theoretical results lacks formality: I was not able to follow Section 4. It might be useful to properly encapsulate in an environment and explain one highlighted result. Experimental Designs Or Analyses: In the experimental results, the dataset ranges from 300 to 3k, does the proposed method also provide improvement for larger datasets? I.e. with the full dataset? Supplementary Material: I checked the additional experiements Relation To Broader Scientific Literature: Good Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable reviews and suggestions! **On no of duplicates in figure 2**: By the no of duplicates, we mean the percentage of generated samples whose similarity to their nearest neighbor in the training set is greater than 0.9. The nearest neighbor is found using the cosine similarity in the DINO space. As per the reviewer’s request, we also calculated the mean and median of the similarity in the same setting as Figure 1 (300 training samples from FFHQ). We will include these results in the updated version. | Method | Mean | Median | |------------------------|:-------:|:-------:| | Baseline | 0.8826 | 0.8931 | | Ours ($\sigma=0.1$) | 0.8854 | 0.8748 | | Ours ($\sigma=0.25$) | 0.8518 | 0.8491 | | Ours ($\sigma=0.3$) | 0.8473 | 0.8475 | | Ours ($\sigma=0.4$) | 0.8287 | 0.8292 | | Ours ($\sigma=0.5$) | 0.8060 | 0.8069 | **On FID using Dinov2**: As per the reviewer’s suggestion, we performed additional experiments to evaluate the quality of our model with FID using Dinov2. We will add these results with a plot in the updated version. | Method | FID | FD_{DinoV2} | |------------------------|:-------:|:-------:| | Baseline | 16.21 | 353.19 | | Ours ($\sigma = 0.1$) | 14.94 | 347.21 | | Ours ($\sigma = 0.25$) | 15.05 | 344.60 | | Ours ($\sigma = 0.3$) | 16.14 | 358.23 | | Ours ($\sigma = 0.4$) | 19.55 | 371.81 | | Ours ($\sigma = 0.5$) | 23.73 | 382.03 | **On clarification about the sampling process**: Our method is a training-time mitigation strategy, and therefore, no modifications are made during the sampling. For $t\geq t_n$, we replace the original dataset with a noisy version of it where each datapoint has been replaced (*once*) with a noisy realization. This step corresponds to the creation of $S_{t_n}$ at step 1 of Algorithm 1 (Appendix page 12). The creation of $S_{t_n}$ is very important as it ensures the reduced information about the clean distribution for the learning at $t\geq t_n$ (see Lemma 4.2). This step leads to decreased memorization. For $t\leq t_n$, we train as usual. We will clarify this in the updated version. **On theoretical results**: We will do our best to improve the presentation of our results, using the one additional page for the Camera Ready. Here we provide an intuitive explanation of the results, and will include such a discussion in the updated version to improve the readability. Lemma 4.1 proves that the optimal score learned at time $t_n$ is a gradient field that points towards the noisy training set. This lemma is formal and can be seen as the analogue of the optimal score solution for standard DDPM in a noiseless dataset. Lemma 4.2 motivates why this solution ``leaks’’ less information at time $t_n$ compared to running vanilla DDPM from time $T$ until time $t_n$. This property is formalized using the mutual information. Additionally, we needed the Gaussianity assumption to make it formal since for the Gaussian, we can derive explicit expressions for mutual information. Section 4.2 explores connections between our work and the work of Feldman, which focuses on the multiclass classification. Theorem 4.3 presents the analogue of Feldman’s result with a more general loss function. However, the message is essentially the same: the more heavy-tailed the distribution of the frequencies, the higher the penalty of not memorizing (in our setting, memorization corresponds to making the score objective small). This is formally presented in Lemma 4.4, which comes from the work of Feldman and shows that the penalty term \tau_1 depends on the tail of the distribution of the mixing coefficients. Finally, we illustrate in Lemma 4.5 that the noise scale of the diffusion process affects the parameter \tau_1 since, as the noise increases, the subpopulations merge and the mixing coefficients add up (hence becoming lighter). To make this intuitive argument fully formal, we provide the result for the most standard mixture model, namely Gaussian mixture, and the result should extend to any mixture family since noise addition is a contracting operation. **On larger datasets**: For the unconditional image generation setting, memorization becomes negligible for larger datasets. Per Reviewer’s Request, we trained on a slightly bigger dataset. For FFHQ 5k samples, we get the following results: | Method | FID | S > 0.85 | S > 0.875 | S > 0.9 | |---------|:-----:|:---------:|:---------:|:---------:| | Baseline | 6.4 | 21.58 | 4.98 | 0.46 | | Ours | 6.4 | 20.08 | 4.53 | 0.42 | As seen, our method is still a little better in terms of memorization with the same FID as the Baseline, but the improvement is smaller compared to more data-limited settings. For larger datasets, memorization is still an issue if there is text-conditioning. Our results in Section 5.2. show improvements over the baselines for this regime as well. We hope our rebuttal clarifies the Reviewer's concerns and that the Reviewer will strongly support the acceptance of our work.
Summary: In this paper, the authors propose a simple but effective method for the diffusion model to generate creative images rather than memorizing the training data. Their method is motivated by previous work of (Feldman, 2020) about the generalization in classification problems, where they showed that the model tends to memorize when the distribution of the frequencies is heavy-tailed. So they conjecture that the high noise added to the images in the diffusion model training process would force different subpopulations to start to merge and the heavy tail to disappear. Based on this intuition, they show that the memorization in diffusion model is only necessary in the low-noise region, and they use noisy data to train the model in the large-noise region. Some theory and experiments are provided to support their claims. Claims And Evidence: Most of their claims are supported by evidence. But I found some claims confusing: * In the title of Figure 3, why does the "blurry" output stand for generalization? Can we view this as a quality degradation? (The authors have addressed this on their rebuttal.) Methods And Evaluation Criteria: I think the proposed method and the evaluation make sense for investigating the memorization of the diffusion model. Specifically, they validate the method on both the unconditional generation task and text-conditioned generation. The CIFAR-10, FFHQ, (tiny) ImageNet, and LAION are used to test the memorization. Theoretical Claims: No, I didn't carefully check the proof in the appendix. But their explanation in the main paper make sense and they provide some intuition to help me understand. Experimental Designs Or Analyses: Yes, I reviewed the algorithm and the experimental results. Supplementary Material: I review the generated human face images in the Appendix. Relation To Broader Scientific Literature: The proposed method involves two parts: * For t>tn, they train the diffusion model using the Ambient Score Matching loss (Daras et al., 2024b) * For t<tn, they train the diffusion model using DDPM. It seems that you combine two existing methods and apply them on different noisy scales. Could you reclaim your algorithm novelty? Essential References Not Discussed: No. Other Strengths And Weaknesses: Please refer to other parts. Other Comments Or Suggestions: I think the authors should have more discussion on related work in mitigating the memorization of diffusion models. There have been many interesting works in mitigating memorization, and you should discuss them in your work, although you have different techniques with them. Some works in understanding and/or mitigating memorization that I know: * On the Interpolation Effect of Score Smoothing. https://arxiv.org/abs/2502.19499 * Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention. https://arxiv.org/abs/2403.11052v1 * Taking a Big Step: Large Learning Rates in Denoising Score Matching Prevent Memorization. https://arxiv.org/abs/2502.03435 Besides, I also suggest that you compare your method with those baselines in your experiments to validate your superiority. Questions For Authors: Can your method be integrated with other diffusion methods such as EDM and DDIM? If yes, I would strongly recommend the authors to also apply your method on those diffusion piplines and verify the performance gain. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their Review and suggestions. **On the blurry output of the diffusion models**: Figure 3 contains a noisy image $x_t$ and the learned model’s approximation of $E[x_0 | x_t]$. Intuitively, this expectation represents an average over all possible clean images $x_0$ that could have produced the noisy image $x_t$. As a result, the predicted image is expected to appear blurry. This is validated by the second row (outputs of a diffusion model trained on 52k images): a model trained on the full dataset has blurry predictions, as it should. On the other hand, when a model memorizes the training data, it is not modeling correctly the conditional expectation – instead, it is overconfident, i.e., it believes that one of a few clean images could correspond to a given noisy input $x_t$. In this case, it outputs a sharper (cleaner) image, which reflects memorization. The figure shows that our algorithm (Row 4) matches the blurriness level of the oracle model (52k training images) even though our model is only trained with 300 images. **On combining two existing algorithm and novelty**: The novelty of the algorithm is not in the loss, but in the way we create the data we apply the loss to. Let us clarify further. For times $t\geq t_n$, we replace the original dataset with a noisy version of it where each datapoint has been replaced (*once*) with a noisy realization. This step corresponds to the creation of the set $S_{t_n}$ at step 1 of Algorithm 1 (Line 612, Appendix Page 12). The fact that we create $S_{t_n}$ once before the launch of the training (instead of recreating at each epoch) is very important as it ensures that there is reduced information about the clean distribution for the learning that happens at times $t\geq t_n$ (see also Lemma 4.2). This step in the algorithm leads to decreased memorization. Comparison with existing works: - In the papers “Consistent Diffusion Meets Tweedie” and “How Much is a Noisy Image Worth?”, the authors are dealing with corrupted datasets and they are using Ambient Score Matching loss to train with the corrupted data. - In DDPM, clean data is used for all times using the regular objective. - In this paper, we have clean data, but we intentionally corrupt them for some times to reduce memorization. At the same time, we do not want to reduce performance, so we still use the clean data for certain diffusion times. The idea of intentional corruption (and the way its done in Algorithm 1) so that we achieve both reduced memorization and good performance is the algorithmic novelty of this work. **On the related work and additional baselines based on those works**: We want to bring to the Reviewers’ attention that **two out of the three papers that we are being asked to compare against were released last month** (26 Feb'25, and 5 Feb'25). We submitted this work in January 2025, so we do not think it is fair to compare against these two baselines. Besides, **neither of these two works has experiments with images**. That said, we are happy to cite and discuss these works in the updated version, and we will do so. We thank the Reviewer for bringing the paper “Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention” to our attention and will cite it in the updated version. We attempted to compare against this method, but the code for training-time memorization mitigation is not provided in the official repository. We even tried implementing the approach ourselves, but couldn’t replicate their results. We contacted two of the authors about the issue, but as of today we have not received a reply. If the code becomes available, we will gladly compare. **On integrating the method with other diffusion methods**: Our method can work with any training architecture, training loss type (e.g., x0-prediction, epsilon-prediction, flow-matching, etc), and sampling method (EDM sampler, DDIM, DDPM sampler, etc). All the results in the paper are obtained by using the EDM codebase for all these components (architecture, training loss, sampler). As per Reviewer’s request, we experimented with other pipelines, and we present the results for (FID, Memorization) below when the training is performed using 300 training images for the FFHQ data set. | Method | FID | Memorization (S > 0.9) | Memorization (S > 0.875) | Memorization (S > 0.85) | |-------------------------------|-------|:------------------------:|:-------------------------:|:-------------------------:| | Baseline(ddim) | 16.34 | 41.62 | 49.8 | 58.14 | | Ours (ddim) | 16.48 | 23.76 | 33.58 | 43.60 | | Baseline (iddpm) | 16.83 | 45.54 | 53.74 | 61.82 | | Ours (iddpm) | 16.46 | 26.94 | 37.70 | 47.94 | | Baseline (ddpm (sde sampler)) | 16.73 | 42.37 | 51.91 | 59.48 | | Ours (ddpm (sde sampler)) | 16.35 | 24.34 | 36.28 | 46.04 | In the light of these new experiments and clarifications, we hope that the Reviewer will support the acceptance of our submission. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. I thought the pictures in Figure 3 were the final generated outputs of the diffusion model, so I had that concern. Thanks for your clarification that the picture is the one-step output of the denoiser (E[x_0|x_t]), so the results make sense. The blurry output indicates that the denoiser is combining multiple faces, and it learns the real underlying distribution rather than just memorizing training images. Maybe it would be more clear if you added some detail (E[x_0|x_t]) to the title of Figure 3. And I look forward to more comparison with the related work in your paper if possible in the future. The authors have addressed my concerns, so I increase my score to 3. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score and for your valuable comments! We will update the title of Figure 3 as requested, and we will compare with the baseline you mentioned as soon as the code becomes available.
null
null
null
null
null
null
Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety
Accept (spotlight poster)
Summary: This paper is subsequent work following (He et al, 2024) .The paper proposes a better benign fine-tuning attack based on the influence function techniques. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Fine-tuning attack on benign data is making perfect sense because benign data are hard to be filtered out by Guardrail moderation. And it also makes sense that a subset of benign data can compromise safety more serious, as it is not uncommon to see in other problems (tasks) that a smaller dataset works better than a larger one. Theoretical Claims: Yes. The deduction in (1)- (5) is correct, though this theoretical claims are established by an existing paper. Experimental Designs Or Analyses: Yes. However, more results can be provided. See Weakness for details. Supplementary Material: Yes. I read all of them. Relation To Broader Scientific Literature: Previous work (Qi et al.2023) first show that benign fine-tuning can compromise safety. Previous work (He et al, 2024) show that the attack performance of benign fine-tuning can be increased by curating a subset among the avialble benign dataset (e.g., Alpaca) This work show that the attack performance of benign fine-tuning can be further enhanced by an alternative data curation method. The authors conduct extensive empirical experiment to jusitfy that their data curation scheme is better than (He, et al, 2024). Essential References Not Discussed: * A con-current work Virus[1] is also focusing on harmful fine-tuning attack aiming to bypass guardrail moderation. [1] Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation[J]. arXiv preprint arXiv:2501.17433, 2025. I am aware that the authors have no obligation to discuss the con-current work. However, I would suggest the authors to add a section to discuss it because they are actually aiming to solve the very same problem, but in two very different directions. Specifically, OpenAI and other service provider recently adopt guardrail moderation to filter out harmful samples for their fine-tunig API. However, it is obvious that this ad-hoc solution is problematic, and there recently there is a rise of paper working on designing a better attack to it. The design directions mainly on three main directions. 1. How to make benign data **stronger** in attacking the safety aligned model? (Qi et al,2023) first show that benign fine-tuning attack can also compromise safety. However, it is shown by subsequent work that such benign fine-tuning attack is not as successful as attack with harmful data. (He et al, 2024) show that one can sample stronger “benign data” that can better attack the safety aligned model. 2. How to make harmful data **stealthier** to bypass the guardrail modeation? (Halawi et al,2024) is the first attempt working on this problem. However, the main weakness of this paper is that by adopting their attack, the users in the testing time need to cipher the harmful quesitons into human-unreadable text and decipher the harmful answer transmitted from the server. This paradign actually limits the use case of the harmful fine-tuning attack because the answers from the server is actually is not human readable harmful answers. To address this issue, Virus[1] is a subsequent attempt, which aim to jailbreak the guardrail detection and poison the victim model directly. I believe there are more papers coming out alone these two lines of research, and it is necessary to explicitly inform the readers that there are two lines of research endeavorscurrent going on. * Some other relevant work on the same topic are missing from discussion: Lora fine-tuning efficiently undoes safety training in llama 2-chat 70b Identifying and Tuning Safety Neurons in Large Language Models Targeted Vaccine: Safety Alignment for Large Language Models against Harmful Fine-Tuning via Layer-wise Perturbation Assessing the brittleness of safety alignment via pruning and low-rank modifications Mitigating fine-tuning jailbreak attack with backdoor enhanced alignment Do as I do (Safely): Mitigating Task-Specific Fine-tuning Risks in Large Language Models Safety Layers in Aligned Large Language Models: The Key to LLM Security SEAL: Safety-enhanced Aligned LLM Fine-tuning via Bilevel Data Selection Safety Alignment Shouldn't Be Complicated SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation Towards Secure Tuning: Mitigating Security Risks Arising from Benign Instruction Fine-Tuning Safety-Aware Fine-Tuning of Large Language Models Defending Against Unforeseen Failure Modes with Latent Adversarial Training A safety realignment framework via subspace-oriented model fusion for large language models Safe lora: the silver lining of reducing safety risks when fine-tuning large language models Locking Down the Finetuned LLMs Safety Probe before You Talk: Towards Black-box Defense against Backdoor Unalignment for Large Language Models Separate the Wheat from the Chaff: A Post-Hoc Approach to Safety Re-Alignment for Fine-Tuned Language Models NLSR: Neuron-Level Safety Realignment of Large Language Models Against Harmful Fine-Tuning Panacea: Mitigating Harmful Fine-tuning for Large Language Models via Post-fine-tuning Perturbation On Weaponization-Resistant Large Language Models with Prospect Theoretic Alignment No two devils alike: Unveiling distinct mechanisms of fine-tuning attacks Your Task May Vary: A Systematic Understanding of Alignment and Safety Degradation when Fine-tuning LLMs On Evaluating the Durability of Safeguards for Open-Weight LLMs Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities Defending against Reverse Preference Attacks is Difficult Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning Other Strengths And Weaknesses: **Strength** 1. Paper is easy to read. 2. The performance is better than existing SOtA solution by (He et al, 2024) 3. The attack is tested against SOTA defense against harmful fine-tuning. The authors show that their attacks are effective even though defense such as safeInstrc or Lisa is adopted. The evaluation is really impressive because some of the defense just appear very recently. 4. The code is well organized with runnable script. This make it easier to reproduce the results. 5. The finding that short answer benign sampels sriously downgrade safety but fine-ting only on them compromise utility is novel. I also agree the authors explanation with shallow alignment hypothesis. **Weaknes** 1. Evaluation can be more comprehensive. As all the results in Table 1 are using GPT for evaluation, I suggest the authors to add an experiment on GSM8k, which at least does not rely on GPT score to evaluate the utility. 2. The utility is downgraded very significantly after fine-tuning. This raised concern on whether the fine-tuning enable the models to effectively learn the downstream task. 3. The experimental setting is strange. From table 1, all the utility after fine-tuning is downgraded, even under full benign fine-tuning. This is in contrast of the goal of fine-tuning. We want the model to be better (in terms of at least a sub-task) instead of making the model performing worst. Therefore, I recommend the authors to experiment on GSM8K and plot the same information in Table 1. The utility of GSM8K should increase after fine-tuning on GSM8k samples. 4. The logistical flow of the attack method design is strange. The authors seem to test the self-influence score (Pruthi et al., 2020) in the benign fine-tuning problem context without much intuituion. Before experiment, it is unknown and without any intuition why the outlier samples actually compromise safety. After experiment, the authors do discover that the outlier samples are data with shortest answers, and subsequent associate the success of attack to shallow alignment hypothesis. However, such a way to design method might be problematic and also risky. As pointed out by my next question, it seems that the reason that the attack method work is just a random coincidence with other factors. 5. (Major!) A subsequent question is that: is the outlier score really useful? Let's say there exists another dataset with most data with short answer and few data with long answer. Based on the outlier score, it should be the long answer data being selected. Then will the attack method be useful? I think the answer is probably no, because the authors already showcase that it is the short-toekn samples that is capable of degrading safety alignment of LLMs, but not the long one. If the answer is yes, then it actually contradict your own finding "Influence of Samples with Short Token Lengths." in Page 4. I looks forward to more discussion on this issue. It seems that the finding and method design in this paper is contradicting each other. 6. It is not obvious how the proposed method advance existing method BA(He et al,2024). I only see two group of data in Table 1. One group of HS on Alpaca is even worst than BA. 7. (minor) The method seems to be a direct application of (Pruthi et al., 2020) without much modification to the current problem context, and therefore novelty is not that significant. I don't think this is very important for acceptance of the paper as long as the method is useful. However, I do think that this is an important criterion for a outstanding paper because a novel method can give insight and have more influence to other sub-fields, which apparantly this paper is lacking. I don't expect the authors to solve this concern but just want to point it out here. Other Comments Or Suggestions: I suggest the authors to name "Mitigation against Malicious Fine-tuning" in Page 3, line 130 to "Mitigation against Harmful Fine-tuning", and also replace malicious fine-tuning with harmful fine-tuning in any other places of the paper. My justification is as follows: 1. The name of harmful fine-tuning attack is first used in [2], first available in Feb 26, 2024. The name of "malicious fine-tuning" is first used by [3], first available in June 28, 2024. I guess the concept that the authors want to express is exactly the same with the concept proposed in [2]. In terms of first appearance time to the public, the concept of harmful fine-tuning attack is earlier. [2] Immunization against harmful fine-tuning attacks [3] Covert malicious finetuning: Challenges in safeguarding llm adaptation 2. The concept of harmful fine-tuning attack is explicitly defined in Section 2 in [2]. In contrast, in [3], the concept of "fine-tuning threat model", instead of "malicious fine-tuning", is defined in their Section 2. 3. There are already a line of work using the name of harmful fine-tuning attack, and are accepted already (so they cannot change their titles). I list out a few as follows: Immunization against harmful fine-tuning attacks EMNLP2024 Vaccine: Perturbation-aware alignment for large language model aginst harmful fine-tuning NeurIPS2024 Lazy safety alignment for large language models against harmful fine-tuning NeurIPS2024 Representation noising effectively prevents harmful fine-tuning on LLMs NeurIPS2024 Booster: Tackling harmful fine-tuning for large language models via attenuating harmful perturbation ICLR2025 NLSR: Neuron-Level Safety Realignment of Large Language Models Against Harmful Fine-Tuning AAAI2025 Questions For Authors: 1. For evaluation on Lisa, are you using the Bianchi safety samples or the Bevertails original alignment samples? I also observe a similar issue with Bevertails original alignment dataset --its alignment performance is simply not ideal. Later in the RepNoise paper, (Rosati, 2024) offers a better alignment dataset. See the refusal column in https://huggingface.co/datasets/anonymous4486/repnoise_beavertail 2. In terms of the statement "bi-state optimization strategy has limited effectiveness in suppressing the harmfulness of the benign samples", I find that this statement contradict the results in the Lisa paper. In page 20, Table 15 of the Lisa paper, they do perform an exepriment against Bi-directional Anchoring (He et al., 2024) and their results show that Lisa is quite effective against fine-tuning with benign sampling, with or without advanced data selection of Bi-directional Anchoring. I conjecture the difference of results are due to some different experimental settings, but I need more investigation for figuring out the exact reason. I hope the authors can mention here that the evaluation results are different in the two papers. Future readers might be interested in this discrepancy. 3. I also appreciate if the authors can give some insights of why benign fine-tuning attack seems to be stronger in the testbed of (He et al,2024) (same testbed with this paper) but fails in Lisa's testbed. What is the key factor that enables/disables the success of benign fine-tuning attack? My conjecture is that it is because of the adoption of base model (Llama-chat vs Llama-notchat), evaluation method (GPT score vs. Bevertails moderation) or training method (LoRA vs Full fine-tuning). I think LoRA vs full fine-tuning is probably the reason leading to the discrepancy, but I cannot be sure. Totally fine if you don't have an answer to this comment. I believe this paper has its value. Novel finding has been discovered and is important for subsequent research on attack dataset construction. I would like to see the authors rebuttal to decide my final rating. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Q1.Virus and some other missing relevant work Thanks for the suggestions. We have added all the work to our manuscript. > Q2.1 Additional experiment on GSM8k > Q2.2 Utility downgrade raises concern on fine-tuning Thanks for the great question. **Fine-tuning on GSM8K** We fine-tune LLaMA-2-7B-Chat on 100 samples selected via random, BA, and our method. The train-test split follows (He et al. 2024). Test accuracy is the percentage of correct answers, and HS is measured on the Hex-Phi dataset with GPT-4o as the judge. Method|HS|Acc -|-|- Un-fintuned|1.18|17.57% Random|2.50|20.45% BA|3.39|20.25% Ours|3.42|20.37% As seen, our method successfully unaligns the model, and fine-tuning slightly improves test accuracy, confirming its downstream benefits. **Why utility downgrades**: Unlike GSM8K, where utility is measured by test accuracy, Table 1 uses MT-Bench to evaluate general generation quality. A possible reason is that after fine-tuning, response lengths tend to be shorter, possibly to mimic the Dolly-style concise responses. Since MT-Bench relies on model-based evaluation, potential verbosity bias in the judge model[1] may contribute to the utility downgrade. Utility downgrade is also observed in Table 8 of [3]: Even fine-tuning on the full benign dataset leads to a slight drop. While scores slightly decrease, the generations are of similar high quality from human inspections. > Q3.Logistical flow for the attack method design Lines 147-161 outline our intuition: for an aligned LLM, clean and safe samples lie comfortably within the safety distribution, while harmful samples become “outlier” samples that lie outside the safety scope. We hypothesize that certain outlier samples within benign datasets may have a high potential to push LLM parameters toward harmful zones during fine-tuning. The following experiments are used to validate this intuition (Fig. 3&4). > Q4.What will happen to a dataset where most samples are short samples The short samples will be detected. We'd like to discuss a potential misunderstanding of the outlier samples. Our outlier detection is to find some outliers that are **away from the model's alignment space**, but not to detect relative outliers in the **given dataset** (see Reviewer RTza Q6). Thus, these short samples are selected, potentially because of their weird length compared to the training dataset of the aligned LLMs. Thus, regardless of the fine-tuning dataset's length distribution, the vanilla Self-Inf score always favors shorter samples. Therefore, the finding is not contradictory. Additionally, while short samples are effective, the revised Self-Inf-N, which normalizes length bias, is more useful in capturing 'harmful benign samples.' We welcome further discussion! > Q5. Advantages of our method to BA. - A novel outlier-based approach to breaking safety alignment. - A significantly more efficient method, requiring only one-third of BA's runtime. - No reliance on an anchor dataset. - More practical applications, e.g, continuous learning. Additionally, our primary goal is not to surpass BA in harmfulness. An average score >3 is sufficient to cause substantial safety degradation. Furthermore, since GPT-4o models serve as the judges, score variations might also exist. > Q6.The method seems to be a direct application of (Pruthi et al., 2020) Thanks. Please refer to Reviewer RTza Q2. > Q7.Naming issues(Malicious Fine-tuning) Thanks. We have revised our manuscript. > Q8.Lisa: Beavertails vs. RepNoise Yes, we use original Beavertails. The table below presents the Lisa defense results with repnoise_beavertail (same settings as Figure 8(a)), where refusal responses are used. The results show that the new Beavertail dataset effectively suppresses LLM harmfulness during fine-tuning. K1/K2 steps|HS -|- 2/18|1.52 6/14 |1.24 10/10|1.17 14/6 |1.02 18/2 |1.01 This experiment made us reconsider our earlier claim that the bi-state optimization strategy has limited effectiveness in suppressing the harmfulness of benign samples. We have now included this experiment and revised our arguments accordingly in the updated manuscript. > Q9.Different observations in Lisa's Table 15. As discussed in Q8, Lisa’s performance improves significantly with repnoise_beavertail. We've updated our statement to reflect these findings, which partially align with Table 15 of Lisa's paper. However, key questions remain: Why does the alignment dataset influence Lisa’s performance? And how can we design better defenses against benign fine-tuning attacks? These might be interesting for future readers. > Q10.Why benign fine-tuning attack fails in Lisa's testbed. Lisa seems to fine-tune directly with BA-selected Alpaca samples, but the sample scores were computed using a LLaMA-Chat model in full fine-tuning mode. While BA's paper shows some transferability, a more direct approach would be replicating the scoring process with a LLaMA-NonChat model in LoRA mode. [1]https://arxiv.org/pdf/2310.10076 --- Rebuttal Comment 1.1: Comment: Hi authors, Thanks for the rebuttal. Most of my concerns are addressed. However, I still have several comments after reading the rebuttal: Q2.1, the results look fine. A regret is that Self-Inf-N basically share the same performance with BA. I read your answer for Q5., and understand the main contribution Self-Inf-N. Q.3 I still think the motivation of the design of Self-Inf-N is weak after reading your answer. Even though the outlier is over the aligned model, i.e., shorter answer can be more easily regarded as an outlier. However, this still can be a coincidence and does not form a good logistical flow. For example, let say the aligned model (i.e., chat model) is aligned by mostly short answers, then understandably the long answers can be the outlier. In this case, the self-inf-N method will select the long answers, which still contradict your own finding "Influence of Samples with Short Token Lengths." in Page 4. It seems that the fundamental issue of the method design is that the method is not based on a logistical thinking (or a basic conjecture) but start with some pure experiment trials. Yes, with coincidence you can get it right, however, you never know how and why the method actually work. With that said, other concerns are sufficiently addressed. I think most other part of the papers are fine. Novel finding without covered by existing literature has been proposed. The new method indeed eliminate the use of anchor dataset in BA, which is good. I will support this paper acceptance, and will actively participate in the reviewer-AC discussion phase. However, due to the remain concern. I insist to keep the borderline score. I guess you have another round to reply. Feel free to challenge me if you think what I said is wrong. **I saw the author's reponse**, and I appreciate the additional clarifications. I think this paper can be accepted as long as the clarification is included in the updated paper. I will actively participate in the reviewer-AC discussion and champion acceptance of this paper. --- Reply to Comment 1.1.1: Comment: Thank you very much for your encouraging words and for recognizing the main contributions of our work. We would like to take this valuable final rebuttal opportunity to further elaborate on the two insightful points you raised. > Q2.1, A regret is that Self-Inf-N basically shares the same performance with BA. We truly appreciate your recognition of the main contribution of Self-Inf-N. We would like to take this chance to highlight two additional points that we believe further demonstrate the uniqueness and practicality of our approach: **No Reliance on Anchor Dataset**: As you noted, our attack method achieves comparable performance to BA without relying on any additional anchor dataset. **Unique Advantages under Practical Scenarios**: Beyond the standard evaluation, Self-Inf-N also exhibits distinct advantages under more realistic settings. One such setting is continuous learning that we mentioned in Sec 4.4. Specifically, fine-tuning on a small set of only 100 selected samples neither provides enough task-specific knowledge nor ensures good performance on the downstream task, which makes it easier to identify as a useless LLM and raises suspicions and potentially leading users to discontinue. Therefore, we aim to investigate whether the attack is still valid in the continuous fine-tuning setting, where the fine-tuning contains two stages (Stage 1: benign fine-tuning with selected 100 samples; Stage 2: Domain-specific Fine-tuning with Asclepius dataset). As we explored in the continuous setting, once the model is fine-tuned with samples found by our method, the harmfulness can persist during the following continual fine-tuning stages. We also evaluate the same setting with BA. However, their harmfulness diminished much faster in the second fine-tuning stage than ours under different learning rates used in the second stage. Specifically, the results (same setting as in Figure 6 (b)) are as follows, |Setting $\downarrow$|HS of BA| HS of Ours| |-|-|-| |w/o first stage|1.31|1.31| |lr=5E-6|2.13|3.39| |lr=8E-6|1.87|3.20| |lr=2E-5|1.62|2.78| Here, HS denotes the average harmfulness score across 11 categories in Hex-PHI. "w/o first stage" indicates the model is fine-tuned solely on the Asclepius QA dataset. These results illustrate that the harmfulness introduced by our method is more enduring than BA’s in a continual fine-tuning setup, further reinforcing its practical impact. > Q.3 The motivation of the design of Self-Inf-N is weak after reading your answer. Thank you for the insightful question. **Supporting Evidence for the Intuition**: Our intuition for selecting outlier samples in the benign dataset is supported by two recent, well-established works. Prior research [1] has shown that harmful samples are explicitly distinct from harmless ones in the models' hidden states, while [2] has demonstrated that fine-tuning LLMs on datasets containing more harmful samples tends to cause greater shifts in model embeddings. Together, these findings partially support our intuition that outlier samples tend to induce greater weight deviations. Therefore, we propose to use Self-Inf as an outlier detection method to pick the outlier samples in the benign dataset, which have greater potential in shifting model weights. We have incorporated a discussion of these two papers into our explanation of the intuition, which we believe will strengthen our argument. **Clarifying the Length Bias Discussion**: The discovery regarding short token lengths is indeed an interesting and unexpected finding. However, we regard it as a secondary insight rather than a core motivation. Our primary goal is to develop a method that reliably induces harmfulness in LLMs. The exploration of length bias (Section 3.3) led us to refine Self-Inf into Self-Inf-N, which addresses this bias and forms our final proposed method. **Response to the Hypothetical Scenario**: Your counterexample—where aligned models are predominantly trained with short answers—is a thoughtful and important one. While most current LLMs are aligned using relatively long, detailed conversations, your scenario points to a meaningful direction for future exploration. We agree that under such circumstances, long answers might be outliers, and the Self-Inf approach might select them. We plan to acknowledge this limitation in the final version of our paper and are grateful for your suggestion. Once again, thank you for your detailed comments and thoughtful suggestions throughout the review process. Your feedback has significantly helped us improve the clarity, rigor, and broader relevance of our work. [1] On prompt-driven safeguarding for large language models. [2] Vaccine: Perturbation-aware alignment for large language model.
Summary: This submission investigates a vulnerability in the fine-tuning stage of large language models (LLMs), demonstrating that benign datasets can be exploited to compromise safety alignment. The authors examine this problem via the lens of outlier identification, using Self-Inf-N to discover and remove outlier samples from benign datasets that may compromise LLM safety during fine-tuning. The principal contributions encompass: (1) Detection of detrimental samples within benign datasets via outlier identification; (2) Creation of Self-Inf-N, a normalized self-influence scoring methodology that mitigates length bias in outlier detection; (3) Empirical findings indicating that fine-tuning on merely 100 outlier samples identified by Self-Inf-N can jeopardize the safety alignment of large language models; (4) Illustration of the attack's transferability across various model architectures and sizes; (5) Evaluation of the attack's efficacy in real-world contexts. Claims And Evidence: The claims made in the submission are supported by evidence, but there are some limitations that undermine the submission's significance. The primary claim that "fine-tuning LLMs on 100 outlier samples selected by Self-Inf-N in the benign datasets severely compromises LLM safety alignment" is demonstrated across several models. The core research problem "Could seemingly harmless, benign samples be exploited to further undermine LLM’s safety alignment?" is developed based on prior work [1]. The authors' contribution of using outlier detection to identify the most problematic samples is incremental. The evidence for the effectiveness of Self-Inf-N over vanilla Self-Inf is convincing, but this technical improvement does not sufficiently advance the field's understanding of LLM safety vulnerabilities. Ref: [1] Fine-tuning aligned language models compromises safety, even when users do not intend to! In ICLR, 2024. Methods And Evaluation Criteria: The proposed method and metrics are suitable for the research issue. The Self-Inf-N method enhances existing gradient-based impact estimating approaches by resolving a significant shortcoming (length bias) seen in the standard Self-Inf approach. The evaluation criteria for safety uses the comprehensive and established benchmark. "GPT-4 as a judge" for assessing harmfulness is a standard metric for evaluating the harmfulness of the responses. The experimental design employs various baselines and conducts ablation studies on many hyperparameters, offering a comprehensive evaluation of the attack's efficacy. Theoretical Claims: The paper does not make extensive theoretical claims or provide formal proofs, as it is primarily an empirical study. Experimental Designs Or Analyses: The experimental designs and analyses in the submission are generally well-designed but have two limitations. The main experiments on fine-tuning LLMs with outlier samples are heavily biased toward Llama-2 models, which are no longer representative of current LLMs. The field has evolved rapidly, with newer models incorporating more sophisticated safety alignment techniques that may be more resistant to the attack described in this paper. The submission also lack experiments with closed-source fine-tuning services. While the authors mention in the introduction that "fine-tuning service providers (e.g., OpenAI Platform) might refuse the request for fine-tuning models on harmful datasets," they do not test whether their benign outlier samples could bypass the safety filters of these commercial platforms. Supplementary Material: The supplementary material provides additional information supporting the submission's claims. The appendix includes: - Detailed descriptions of the evaluation benchmarks - Dataset preparation procedures for Dolly and Alpaca - Justification for the choice of normalization in Self-Inf-N - Extensive experimental results - Examples of filtered samples and harmful generations Relation To Broader Scientific Literature: The core finding of this submission has already been discussed in [1]. While the authors introduce a new method for identifying the most problematic samples (Self-Inf-N), this represents a refinement rather than a fundamental advance in our understanding of LLM vulnerabilities. The authors acknowledge prior work on malicious fine-tuning and benign fine-tuning compromising safety alignment but do not sufficiently differentiate their contribution from these existing works. Ref: [1] Fine-tuning aligned language models compromises safety, even when users do not intend to! In ICLR, 2024. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: N.A. Questions For Authors: 1. Have you tested Self-Inf-N on close-sourced models like GPT-4o? 2. How do you differentiate your contribution from [1]? 3. Given that most commercial LLMs are equipped with extensive safety evaluations, how would the proposed method realistically impact deployed systems? 4. Have you compared Self-Inf-N with other outlier detection approaches beyond gradient-based influence estimation, such as density-based or deep learning approaches for anomaly detection? Ref: [1] Fine-tuning aligned language models compromises safety, even when users do not intend to! In ICLR, 2024. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1.The difference between our paper and [1] [1] raises an important observation: even fine-tuning on benign samples can lead to a certain degree of safety degradation. However, despite this initial investigation, two key questions remain unexplored: - The increase in harmfulness is quite limited and occurs only in certain categories (see Figure 1 in [1]). - Why does safety degradation happen? [1] attributes this phenomenon to "catastrophic forgetting" on the utility-oriented dataset but does not delve into deeper investigations. To address these questions, we take a step further by investigating whether a specific subset of the benign dataset contributes disproportionately to safety degradation. We approach this from a novel outlier detection perspective. We believe that the insights gained from our outlier detection analysis (as outlined in Q2) will enhance the community’s understanding of LLM vulnerabilities. > Q2.Technical improvement does not sufficiently advance the understanding of LLM safety vulnerabilities. **Technical improvement.** While Self-Inf is a well-established method [2,3], we are **the first to adapt it for benign fine-tuning attacks**. Importantly, our proposed Self-Inf-N introduces non-trivial refinements(mitigating length bias), which lead to significantly stronger attacks on LLMs (see Figure 4). The core objective of our paper is to uncover vulnerabilities in LLMs during fine-tuning. We believe our contributions offer meaningful insights for the community: - Short samples can severely degrade safety alignment (Figure 3; Section 3.3). - Self-Inf-N, a refined outlier detection method, improves attack effectiveness (Figure 4). - Strong empirical results show that Self-Inf-N misaligns LLMs across model sizes and architectures, and remains effective in continuous learning and data poisoning scenarios (Figure 4, Table 1, Figure 5). > Q3.The main experiments are biased toward Llama-2 models Thanks for the great question. We also evaluate our method with other more advanced methods like Qwen2-7b-instruct. The following table shows the results. Method|HS| -|-| Random Selection|1.63 BA|3.05 Ours|3.22 It is shown that, our method still successfully leads the fine-tuned Qwen model to a safety degradation problem. We have added the experiments to our updated manuscript. > Q4.Lack experiments with closed-source models. Thank you for the great question. Our method requires gradient access, which is not feasible for closed-source models. However, we evaluate the transferability of samples selected using LLaMA-2-7B-Chat to GPT-3.5 and GPT-4. The results show limited impact, likely due to (1) large architectural differences between the two model series and (2) (potential) stronger safety mechanisms in closed models. We believe this is a valuable direction for future research. > Q5.How does the proposed method impact deployed systems? **Escaping existing safety moderations.** Firstly, our method can successfully escape the moderation tools (Sec 4.6), while the harmful samples would be easily detected. **Benign Fine-tuning breaks safety** Our attack method is effective for the popular fine-tuning-as-a-service setting for the deployed LLM: when the user uploads the selected "seemingly" benign samples, the safety of the fine-tuned LLM would be broken. > Q6.Comparisons with other outlier detection methods. **Results with DBScan** We conducted a simple experiment using DBSCAN, applying it to the last-layer hidden representations to identify 100 samples whose embeddings differ most from the overall distribution. The results on the Dolly dataset are shown in the table below. As observed, DBSCAN demonstrates significantly lower effectiveness compared to our method. Method|HS| -|-| DBScan|1.62 Ours|3.71 **Why do density-based and DL approaches fail?** We believe the key difference lies in the objective: rather than identifying relative outliers **within the dataset**, our goal is to find samples that most strongly **influence the deviation of the aligned model’s parameters**. In other words, we intend to find samples that are most "shocked" (influential) to the aligned model $f_{\theta}$. Density-based and DL approaches, however, are not suitable for our setting as they mainly focus on finding relative outlier samples in the **given dataset**. **Why use (Pruthi et al., 2020)?** We chose the method from Pruthi et al. (2020) due to its proven effectiveness and ease of use in LLMs, as supported by recent work in instruction tuning data selection [2] and outlier analysis [3]. Additionally, compared to other gradient-based influence estimation methods (e.g., Data-inf), it offers significantly better memory efficiency. [1]Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety [2]Less: Selecting influential data for targeted instruction tuning [3]Outlier gradient analysis: Efficiently improving deep learning model performance via hessian-free influence functions.
Summary: This paper puts forth the idea that fine-tuning can compromise the safety of large language models. In particular, they leverage existing work from the field of outlier detection to exploit these data points in benign datasets and then demonstrate that fine-tuning on these examples (which are still, by construction, benign) trains a resultant model that is more likely to be harmful or non safety aligned. Claims And Evidence: Both scenarios 1 (continuous learning) and 2 (data poisoning) are well motivated as practical extensions of the proposed methodology. The section on safety mitigation is well received. However, is there a reason why the authors only used API-based detection tools? Many model users also have access to model-based guardrails such as [LlamaGuard](https://arxiv.org/abs/2312.06674), [GraniteGuardian](https://arxiv.org/abs/2412.07724), or [WildGuard](https://arxiv.org/abs/2406.18495). It would have been good to corroborate any findings from the API based detection setups with one or more of these model-based guardrails. Methods And Evaluation Criteria: The harmfulness evaluation has a few issues. 1. The harmfulness of the resultant models is evaluated through a single dataset of 330 samples. This is quite limited, and begs the question why the authors didn't consider other evaluation benchmarks. In particular, this benchmark is rather small and may not capture the full extent of harm benchmarks. Why not consider other commonly used safety benchmarks, such as the [BBQ dataset](https://aclanthology.org/2022.findings-acl.165/) or [ToxicChat](https://arxiv.org/abs/2310.17389) or [XSTest](https://arxiv.org/abs/2308.01263). Note that the last dataset, XSTest, might be particularly useful for this instance given its motivation. 2. The harmfulness evaluation comes from an auxiliary model (namely, GPT-4). The authors do not provide the specific prompt that is used here, so it is hard to judge. Additionally, there is no corroboration of the judge model's effectiveness - either through a small human validation or comparing to other state of the art harm detection models such as [LlamaGuard](https://arxiv.org/abs/2312.06674), [GraniteGuardian](https://arxiv.org/abs/2412.07724), or [WildGuard](https://arxiv.org/abs/2406.18495) Also small note that the main text says GPT-4 is used, but Appendix Section A ays GPT-4o is used. Theoretical Claims: The theoretical grounding in influence functions via data influence estimation is well presented. Additional background regarding outlier detection is also provided. Experimental Designs Or Analyses: The experimental design makes sense overall. Supplementary Material: I briefly skimmed through Section A on the eEvaluation Benchmark. Relation To Broader Scientific Literature: The authors do a good job of relating their current setup to all of the previous work on malicious fine-tuning and how it may compromise the safety alignment of LLMs. Specifically, they do well to provide links to all of these similar works and also continue to reference them throughout the paper. Essential References Not Discussed: N/A Other Strengths And Weaknesses: A strength of the paper is the breadth of models considered in the experiments, with most if not all of them being fully open source which is a big positive. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1.Why using API-based detection tools? Thank you for the insightful question. **We totally agree that the suggested evaluation model can further enhance the robustness of our analysis, hence we have already included it in our updated results.** Method|LlamaGuard|GraniteGuard|WildGuard -|-|-|- Harmful Dataset|9, 91|6, 94|5, 95 Ours|100, 0| 100, 0|100, 0 Each cell shows two values: the number of detected safe samples and unsafe samples, respectively. Our method clearly outperforms the harmful dataset. We also respectfully argue that API-based and model-based detection tools are fundamentally similar, as API-based tools also rely on **underlying models** trained on large, crowd-labeled datasets for general-purpose safety detection (similar to how ChatGPT operates). We chose API-based tools due to their widespread adoption in both academic research[6] and commercial applications[1]. > Q2.Why not considering other safety benchmarks, e.g., BBQ, ToxicChat, and XSTest. **We fully agree that including additional benchmarks leads to a more comprehensive evaluation. Hence we have incorporated both the BBQ and XSTest datasets to further evaluate our method on LLaMA-2-7B-Chat**. For XSTest, we followed the original paper's setup and used GPT-4 to classify the model’s responses to unsafe queries into three categories: full_compliance, full_refusal, and partial_refusal. Method|full_compliance|full_refusal|partial_refusal -|-|-|- w/o finetuned|14|182|4 BA|130|63|7 Ours|146|50|4| For BBQ, we computed the bias scores on both ambiguous and disambiguated subsets (higher scores indicate more bias): Method|Bias Score (ambig)|Bias Score (disambig) -|-|-| w/o finetuned|18.66|39.17 BA|47.50|46.51 Ours|46.95|46.47 As seen, LLM fined-tuned with our method exhibits a more compliance to the harmful query prompts in XSTest, and a more biased outputs in BBQ, compared to BA and un-finetuned version. **The limitations of ToxiChat, BBQ, and XSTest** - ToxiChat primarily evaluates toxicity detection tools (e.g., OpenAI Moderation API), which falls outside the scope of our work. - BBQ focuses on bias in models' generations, which is **just one dimension** of safety. - XSTest is indeed a good benchmark, which contains ten safety categories, , but mainly used to study over-refusal behaviors. Compared to HEx-PHI, it has fewer unsafe prompts (200) and covers safety categories less comprehensively. **Why we use Hex-PHI?** Hex-PHI is a well-established benchmark for evaluating LLM's safety. It is grounded in the 'exhaustive lists of prohibited use cases found in **Meta’s Llama-2 usage policy and OpenAI’s usage policy**.'[2], covering a diverse range of safety categories such as **illegal activity, privacy violations, and child exploitation**. It has also been adopted in several recent works [3,4,5], making it a strong choice for evaluating LLM safety in our context. > Q3.Prompt for the judge model As noted in Appendix A, 'The prompt provided to the judge model is identical to that used in [2]'. Please refer to [2] for more details. > Q4.Corroboration of the judge model's effectiveness Thank you for the thoughtful question. **Following your suggestion, we further evaluate the generated 330 responses on Hex-PhI using additional guard models: LlamaGuard, GraniteGuard, and WildGuard.** Method|LlamaGuard|GraniteGuard|WildGuard -|-|-|- w/o Finetuned|321 safe+9 Unsafe |324 Safe+6 Unsafe|328 safe+2 Unsafe BA|103 safe+227 Unsafe|63 Safe+267 Unsafe|105 Safe+225 Unsafe Ours|100 safe+230 unsafe|70 safe+260 unsafe|104 safe+226 Unsafe These results lead to the same conclusion that our method leads to significantly more unsafe responses. However, through manual inspection, we identified two limitations of these tools: (1) High false negative rate – many harmful outputs are incorrectly labeled as safe. (2) Limited granularity – guard models are trained on predefined categories and may not detect nuanced harms (e.g., economic harm in HEx-PHI is hard to be detected). Therefore, we believe LLM-as-a-judge remains a more reliable evaluation method for our setting. Using LLM-as-a-judge is a widely accepted practice for safety evaluation, as adopted by several well-established works [2,3]. (Reviewer UXrg and RTza also noted, GPT-4 as a judge for assessing harmfulness is a standard metric). > Q5.GPT-4 vs. GPT-4o Thanks for your detailed review. That was indeed a typo, we used GPT-4o in our experiments. [1]https://medium.com/jigsaw/reducing-toxicity-in-large-language-models-with-perspective-api-c31c39b7a4d7 [2]Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! (ICLR 2024) [3]Safety alignment should be made more than just a few tokens deep. (ICLR 2025) [4]Artprompt: Ascii art-based jailbreak attacks against aligned llms. (ACL 2024) [5]SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding. (ACL 2024) [6]Bias and fairness in large language models: A survey
Summary: This paper investigates a vulnerability in the fine-tuning stage of LLMs, where even benign datasets can lead to a significant increase in the harmfulness of LLM outputs. The authors propose a novel attack method, Self-Inf-N, which identifies and selects outlier samples from benign datasets to fine-tune LLMs, thereby compromising their safety alignment. Specifically, the proposed attack exhibits high transferability across different architectures and remains effective in practical scenarios. Furthermore, most existing mitigation strategies fail to defend against this attack, highlighting the need for more robust alignment safeguards. The experiments demonstrate the effectiveness and transferability of the attack. Claims And Evidence: The claims made in the paper are generally supported by clear and convincing evidence. However, there are some points could be further clarified: - The paper discusses practical scenarios like continuous learning and data poisoning, but the experiments are somewhat limited in scope. - The authors suggest that benign samples could avoid toxicity detection which is trivial as the data are benign samples but the toxicity detection tools are designed for harmful training data. Is there any design suggestion in mitigating safety issue during fine-tuning? Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate. Theoretical Claims: The paper does not present any theoretical proofs. Experimental Designs Or Analyses: The experimental designs and analyses are generally sound. Supplementary Material: The ablation studies in the appendix are good; I didn't review the code carefully, but it looks good at first glance. Relation To Broader Scientific Literature: This paper could contribute to making LLM safe and mitigate fine-tuning vulnerabilities. Essential References Not Discussed: Although using a different approach, there are some concurrent works that could be discussed in a future revision: - Hsiung et al. "Your Task May Vary: A Systematic Understanding of Alignment and Safety Degradation when Fine-tuning LLMs" (2024) - Mu et al. "Stealthy Jailbreak Attacks on Large Language Models via Benign Data Mirroring" (NAACL 2025) Other Strengths And Weaknesses: **Strengths** - This paper addresses a critical and underexplored vulnerability in LLM fine-tuning. - The proposed Self-Inf-N method is novel and effectively mitigates the length bias in the original Self-Inf method. - The extensive experiments demonstrate the effectiveness and transferability of the attack across multiple LLMs. **Weaknesses** - This paper did not discuss any corresponding defense mechanism in addressing the potential threats. Other Comments Or Suggestions: none Questions For Authors: Please refer to the weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1. The paper discusses practical scenarios like continuous learning and data poisoning, but the experiments are somewhat limited in scope. Thank you for the insightful question. The continuous learning and data poisoning settings represent our preliminary exploration into how the proposed method might realistically threaten LLM alignment in practice. In particular, we incorporated practical considerations such as a real-world medical downstream task (Asclepius) and experiments with low poisoning ratios (see Section 4.4.2). While we acknowledge that our current work cannot exhaustively cover all practical scenarios, we hope these initial investigations can serve as a foundation for future work in this important area. > Q2. The authors suggest that benign samples could avoid toxicity detection which is trivial as the data are benign samples but the toxicity detection tools are designed for harmful training data. Is there any design suggestion in mitigating safety issue during fine-tuning? Thank you for raising this important point. **Any Mitigation Strategies During Fine-tuning?** We have explored **additional mitigation strategies** in Section 4.6, including both *data augmentation* and *fine-tuning-stage defense* methods. Specifically, we evaluate whether incorporating extra alignment samples can mitigate harmfulness, and we also test advanced fine-tuning defenses such as LISA [3]. Empirical results show that augmenting the fine-tuning dataset with safety samples from the Bianchi dataset would effectively suppress the harmfulness. **Why do we incorporate toxicity detection?** We agree that it is expected for benign samples to bypass toxicity detectors, and we include this analysis primarily to highlight **the stealthiness of such attacks in practice**. > Q3. Although using a different approach, there are some concurrent works that could be discussed in a future revision: Thanks for these suggestions. We have included the suggested papers in our updated manuscript! > Q4. This paper did not discuss any corresponding defense mechanism in addressing the potential threats. In Section 4.6, we identify a promising mitigation strategy: augmenting the fine-tuning dataset with a subset of safety-aligned data from Bianchi et al., which appears to reduce the harmfulness of LLM outputs. We also discuss the limitations of existing defense methods and emphasize the need for more advanced techniques specifically designed to address benign-sample-based alignment attacks. We hope this work can help motivate further research in this critical direction. [1] Hsiung et al. "Your Task May Vary: A Systematic Understanding of Alignment and Safety Degradation when Fine-tuning LLMs" (2024) [2] Mu et al. "Stealthy Jailbreak Attacks on Large Language Models via Benign Data Mirroring" (NAACL 2025) [3] Huang T, Hu S, Ilhan F, et al. Lisa: Lazy safety alignment for large language models against harmful fine-tuning attack[J]. Advances in Neural Information Processing Systems, 2024, 37: 104521-104555.
null
null
null
null
null
null
Orthogonal Subspace Decomposition for Generalizable AI-Generated Image Detection
Accept (oral)
Summary: In this paper, the authors explain the issue of lacking generalization in deepfake detection from the perspective of SVD. Specifically, they argue that due to the limited nature of fake features, models trained on deepfake datasets tend to produce low-rank matrices, which leads to a failure in capturing key components necessary for distinguishing real images and unseen samples. To address this issue, they propose freezing most of the parameters in a pre-trained model—preserving its ability to detect real images—while only adjusting a subset of parameters to adapt to forgery patterns. This parameter decomposition is achieved through SVD decomposition. Claims And Evidence: The proof that explains generalization loss using the rank of the SVD decomposition is complete. Methods And Evaluation Criteria: Yes, the authors adopted standard datasets and evaluation metrics commonly used in deepfake detection. Theoretical Claims: The reviewer examined the authors’ explanation of the “asymmetry phenomenon” and found it convincing. Experimental Designs Or Analyses: [1] The experimental setting of cross-dataset and cross-method evaluation is feasible, and the chosen baselines are appropriate. [2] The ablation study examines the impact of individual components, the effects of switching the backbone, and comparisons with other PEFT methods, making the experimental design relatively comprehensive. Supplementary Material: The reviewer has checked the expansion of the proof and supplementation of the experimental section, and find them convincing. Relation To Broader Scientific Literature: Extensive research has been conducted on the generalization issue in deepfake detection, with numerous studies addressing its existence and potential causes—including the distribution differences between real and fake features, as discussed in this paper. However, this paper provides a novel perspective by further analyzing the problem through the lens of SVD. In particular, the idea can be useful in addressing challenges such as analyzing overfitting problems, examining the expressivity of feature spaces, and preserving pre-trained knowledge while adapting to downstream tasks. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The reviewer does not have major concerns, some minor weaknesses are listed below: [1] Many details in the analysis are missing. For instance, which testing datasets are used to generate the visualizations of Figure 5 and Figure 6 [2] The implementation and experimental details of GenImage are missing. It is not specified whether the authors used the same settings as those in the UniversalFakeDetect benchmark. [3] In Figure 6, the number of main principal components is heavily influenced by the testing dataset. If the dataset changes, the values may differ. The authors should provide additional visualizations for different testing datasets. For instance, face deepfakes might be less diverse than general AI-generated images, leading to inconsistent results. [4] The average results in Table 4 don’t align with those mentioned in the text. [5] How is the retained rank r determined? Would adjusting r for different datasets lead to better or different results? The authors also have not provided a clear explanation for why n-r=1 achieves the best results. [6] The font size in Figure 6 is too small, particularly in the legend. Other Comments Or Suggestions: The reviewer has no further comments on the paper. However, the reviewer would like to offer one additional suggestion on writing: the concept of the “asymmetry phenomenon in AI-generated image detection” seems like a rebranding of the generalization issue and comes across as somewhat overly elaborate. This does not affect the reviewers’ assessment of the paper’s quality. However, it would be ideal if this section can be condensed a bit. Questions For Authors: Please see weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer SVkD for the constructive comments, insightful questions, and useful suggestions.** We address the reviewer's concerns below. > **Q1.** The authors should provide additional visualizations for different testing datasets. For instance, face deepfakes might be less diverse than general AI-generated images, leading to inconsistent results. **R1.** Thanks for your valuable suggestion. - Indeed, the number of principal components (effective rank) varies across datasets because our method measures the "effective dimension" of the feature space, which **depends on the input data distribution**. - To validate this, we analyzed the effective rank on multiple test datasets, including **CDF-v2 (face-focused)** and **UniversalFakeDetect (general-content)**. Key observations from **Table 5** are: - **General datasets exhibit a higher effective rank than face-specific datasets.** This is because general datasets contain diverse objects, leading to a more complex feature space. - **Our SVD-based method outperforms LoRA and full fine-tuning (FFT)** in capturing a higher-rank feature space, preserving more discriminative information. ***Table 5: Evaluation results on different testing datasets.*** Tuning Methods | Train Datasets | Test Datasets | Effective Rank | Mean Accuracy (%) | -|-|-|-|-| | svd | FF++ (**face**) | CDF-v2 (**face**) | 159 | 95.60 | | lora | FF++ (**face**) | CDF-v2 (**face**) | 137 | 89.40 | | fft | FF++ (**face**) | CDF-v2 (**face**) | 57 | 85.70 | | svd | UniversalFakeDetect (**general**) | UniversalFakeDetect (**general**) | 316 | 95.19 | | lora | UniversalFakeDetect (**general**) | UniversalFakeDetect (**general**) | 304 | 93.03 | | fft | UniversalFakeDetect (**general**) | UniversalFakeDetect (**general**) | 238 | 86.22 | --- > **Q2.** The authors have not provided a clear explanation for why n-r=1 achieves the best results. **R2.** Thanks for the interesting questions. Our choice of using a lower rank (specifically, n-r=1) for fine-tuning DFD is primarily motivated by two critical factors: - First, **the nature of the real-fake classification task itself makes it relatively straightforward.** Specifically, fake samples in existing training sets tend to exhibit a limited number of distinctive forgery patterns (FF++ contains only four forgery types), each with relatively simple and consistent characteristics. - Due to this simplicity and limited diversity, a low-rank adaptation with a small rank (e.g., "n-r"=1, 4, or 16) is sufficient for the model to effectively learn these forgery patterns. As demonstrated by Table 5 in our paper, choosing ranks of 1, 4, or 16 yields very similar performance results. Given this observation, **we prioritize efficiency and parameter economy, making rank 1 the optimal choice.** - Second, **the inherent characteristics of binary classification further justify selecting a smaller rank.** *Binary classification tasks typically do not require the model to learn extensive and nuanced patterns*, but rather to identify just enough distinctive features to separate the two classes, making the learned feature space inherently constrained. - To illustrate, when distinguishing between cats and dogs, the classifier might achieve good accuracy simply by examining the tails, without needing detailed knowledge about specific breeds such as Corgis or Shibas. Thus, **binary classification inherently simplifies the complexity of the learning problem**, meaning that employing a higher rank would not provide significant additional benefit. - In contrast, more complex tasks like multi-class classification (as discussed in our extended framework proposed in **our response R1 of the Reviewer cn1X**) necessitate a higher rank (the value of "n-r"). Indeed, our experiments indicate that **increasing the value of "n-r" in multi-class scenarios leads to improved performance**, highlighting the relationship between task complexity and optimal rank selection. ***Table 6: Optimal rank selection based on the task complexity. We evaluate all the models on the Chameleon dataset.*** | binary (n-r=1) | binary (n-r=16) | binary (n-r=32) | binary (n-r=64) | multi-task (n-r=1) | multi-task (n-r=16) | multi-task (n-r=32) | multi-task (n-r=64) |-|-|-|-|-|-|-|-| | 70.27 | 69.34 | 69.08 | 68.87 | 72.62 | 72.89 | 73.76 | 74.18 --- > **Q3.** Many details in the analysis are missing, such as Figure 5 and Figure 6. **R3.** Thanks for the kind mention. All analysis figures, including Figure 5 and Figure 6, are tested on FF_c23 (test), which is commonly used as the within-domain testing dataset. We will clarify it in the revision. --- > **Q4.** The implementation and experimental details of GenImage are missing. **R4.** Thanks for your kind mention. We will add the implementation details and clarify them clearly in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The authors have addressed the additional experiments and the setting of n-r thoroughly, which are key concerns of mine. Therefore, I am considering increasing my score. I also recommend that the authors carefully revise the camera-ready version in accordance with the clarifications provided in their rebuttal.
Summary: This paper proposes Effort, a novel SVD-based adapter tuning method, for generalizable AI-generated image detection. The key idea is constructing two orthogonal subspaces, where the principal components preserve the pre-trained knowledge from the vision foundation models while the residual components are utilized to learn new domain-specific forgery patterns for detection. The paper also leverages PCA to quantify the effective rank of the learned feature space, explaining the failure of conventional detectors. Claims And Evidence: The authors have provided appropriate references, clear visualizations, and thorough analysis, making their work technically sound and well-justified. Methods And Evaluation Criteria: The used evaluation methods and criteria are suitable and common in the related fields. The used benchmarks are commonly used in existing works. Theoretical Claims: Although the authors provide a theoretical explanation to the asymmetric phenomena, I believe this theorem does not significantly contribute to the overall paper. Instead, I suggest moving this content to the supplementary material and replacing it with Algorithm 1, which is much more helpful in understanding the core concepts of the paper. Experimental Designs Or Analyses: This paper has conducted a thorough evaluations on both deepfake and AIGC detection benchmarks, and most analysis experiments are insightful and carefully designed. However, I noticed in Table 6 that lower values of r yield better generalization results. This observation seems counterintuitive, and I believe the authors should provide a clear and reasonable explanation for this phenomenon. Supplementary Material: I don’t find any obvious issue in the supplementary. Relation To Broader Scientific Literature: I believe the proposed method in this paper is quite general and may not be limited strictly to AIGC detection. I encourage the authors to provide a detailed discussion on how their approach could be applied to other related fields, which would further highlight its versatility and broader impact. Essential References Not Discussed: Most related and essential works are properly cited in this paper. Other Strengths And Weaknesses: Strengths: - This paper provides an in-depth and reasonable analysis for the failure reason of existing detectors. The proposed analysis approach is insightful and new to me. - This paper proposes a novel SVD-based method that can explicitly ensure the orthogonality between pretrained knowledge and deepfake-specific knowledge. - This paper conducts comprehensive evaluations on both deepfake detection and AIGC detection benchmarks, achieving high generalization performance over existing methods. Weaknesses: - The two proposed constrained losses seem not very necessary, as the improvement shown in the ablation study is limited. Instead, why not make a constrain to the effective rank, as the rank is an important concept in the paper? - The paper does not provide an intuitive visualization of residual and principal components using PCA? It would provide some new insights and findings. - Some important implementation details of fine-tuning and evaluations are missing in the paper. For instance, how many layers are fine-tuned? What kinds of data augmentations are used in this paper? Other Comments Or Suggestions: Overall, I think the quality of the paper is high, with insightful analysis and new effective method proposed. My comments and suggestions are summarized below. - Moving Algorithm 1 to the main body of the paper would greatly enhance understanding, as it provides a clear and practical insight into the key methodology. - The observation in Table 6 that lower-rank SVD achieves better generalization results is intriguing. The authors should provide a detailed explanation for this phenomenon to clarify its implications. - The analysis of Figure 6 is particularly interesting and insightful. I encourage the authors to include more visualizations of existing detectors, as this could serve as a critical analytical tool for the entire field. It effectively reveals the "discrimination dimension" in deepfake detection, which is a valuable contribution. - To provide a more intuitive understanding, I suggest the authors include visualizations of the principal and residual components. For example, what specific regions or features do the residual components focus on? This would help readers better grasp the underlying mechanisms of the proposed approach. - Adding more implementation details into the main paper or supplementary, as it helps the readers better understand some technical details within the method. Questions For Authors: See the weakness Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer j2Rb for the constructive comments, insightful questions, and useful suggestions.** We greatly appreciate and are encouraged by the reviewer's recognition of our insightful and in-depth analysis, methodological novelty, and comprehensive experiments with high generalization performance. Additionally, the reviewer raised several important concerns and questions, which we address in detail below. > **Q1.** The paper does not provide an intuitive visualization of residual and principal components using PCA. It would provide some new insights and findings. **R1.** We genuinely appreciate the suggestion. **We perform the visualization of the attention maps before and after our proposed orthogonal training** on the UniversalFakeDetect dataset. Specifically, for each block of the vision transformer, we visualize the attention map, which represents the attention coefficient matrix calculated between the cls token and the patch tokens. - Specifically, the attention map is computed across the multiple heads on average and is presented alongside the principal weights, residual weights, and total weights, respectively. - We have three discoveries in general: - 1) The attention map of principal weights is almost identical to the attention map of the total weights for each block; - 2) Before and after training, the attention maps of the residual weights in the earlier blocks do not respond (e.g., for ViT-L of CLIP model, the first 22 blocks do not respond); - 3) Only the attention maps in the last blocks of the residual weights respond, which means they contain the real/fake discriminating information (e.g., for ViT-L of CLIP model, the last 2 blocks respond). Following the reviewer's suggestion, we will add these visualization results and the corresponding analysis into our revision. --- > **Q2.** Some important implementation details of fine-tuning and evaluations are missing in the paper. For instance, how many layers are fine-tuned? What kinds of data augmentations are used in this paper? **R2.** Thanks for the suggestion. We list the implementation details below. - Fine-tuned layers: By default, we fine-tune all query (Q), key (K), value (V), and output (Out) components across every layer of the given architecture, such as CLIP. - Data augmentation strategy: We adopt standard data augmentation techniques commonly used in each specific benchmark. For instance, we follow [1, 2] for deepfake detection, [3] for the UniversalFakeDetect benchmark, and [4] for the GenImage benchmark. Following the reviewer's suggestion, we will present a detailed description in the revision. [1] LSDA. CVPR 2024. [2] DeepfakeBench. NeurIPS 2023. [3] UnivFD. CVPR 2023. [4] NPR. CVPR 2024. --- > **Q3.** The observation in Table 6 that lower-rank SVD achieves better generalization results is intriguing. The authors should provide a detailed explanation for this. **R3.** We genuinely appreciate the suggestion. Our choice of using a lower rank (specifically, "n-r"=1) for fine-tuning DFD is primarily motivated by two critical factors: - First, **the nature of the real-fake classification task itself makes it relatively straightforward.** Specifically, fake samples in existing training sets tend to exhibit a limited number of distinctive forgery patterns (FF++ contains only four forgery types), each with relatively simple and consistent characteristics. - Due to this simplicity and limited diversity, a low-rank adaptation with a small rank (e.g., "n-r"=1, 4, or 16) is sufficient for the model to effectively learn these forgery patterns. As demonstrated by Table 5 in our paper, choosing ranks of 1, 4, or 16 yields very similar performance results. Given this observation, **we prioritize efficiency and parameter economy, making rank 1 the optimal choice.** - Second, **the inherent characteristics of binary classification further justify selecting a smaller rank.** *Binary classification tasks typically do not require the model to learn extensive and nuanced patterns*, but rather to identify just enough distinctive features to separate the two classes, making the learned feature space inherently constrained. Thus, binary classification inherently simplifies the complexity of the learning problem, meaning that employing a higher rank would not provide significant additional benefit. - In contrast, more complex tasks like multi-class classification (as discussed in **our response R1 of the Reviewer cn1X**) necessitate a higher rank (value of "n-r"). Indeed, our experimental results indicate that increasing the value of "n-r" in multi-class scenarios leads to improved performance, highlighting the relationship between task complexity and optimal rank selection. --- > **Q4.** The improvement of the two proposed constrained losses shown in the ablation study is limited. **R4.** Thanks for your comment. Please refer to our response of **R4 of the reviewer 4w2k**. --- Rebuttal Comment 1.1: Comment: Thanks for the author's detailed response. My concerns are all well solved. I remain my score and recommend to accept this paper.
Summary: This paper investigates the failure of generalization in AI-generated image detection, identifying an asymmetry phenomenon where detectors overfit to limited fake patterns, resulting in a low-rank and constrained feature space. To mitigate this, the authors leverage the vision foundation models and propose a novel SVD-based tuning approach that freezes the principal components while adapting the remaining components, explicitly ensuring orthogonality to maintain a higher feature rank, thereby alleviating the overfitting problem. Claims And Evidence: The claims in this paper are well-supported by evidence and detailed explanations. Specifically, Figure 1 validates the existence of the asymmetry phenomenon and shortcut overfitting in AIGI detection. Figure 2 illustrates that the constrained feature space is predominantly influenced by fake data. Figures 3 and 5 further confirm that a baseline model trained naively on the AIGI dataset tends to be highly low-ranked, whereas the proposed approach effectively preserves most of the pre-trained knowledge. Methods And Evaluation Criteria: The evaluation criteria and chosen benchmarks/datasets are appropriate for the problem. The paper demonstrates the effectiveness of the proposed method, achieving SOTA performance on both deepfake and AIGC detection benchmarks (Tables 1, 2, and 3). Also, the evaluation protocols are consistent with those used in many existing studies, ensuring a fair comparison. Theoretical Claims: The proposed theoretical claims are presented in Theorem 3.2, where the authors define the covariance similarity between real and fake images and prove that this similarity has a lower bound under certain assumptions. I did not identify any obvious errors or issues in the validity of the proof. Experimental Designs Or Analyses: Overall, the experimental designs are suitable. However, I have identified the following issues: (1) The authors did not conduct an ablation study on the weights of the orthogonality loss and singular value loss, so it is unclear how the weights are allocated in the loss function; (2) The authors primarily use CNNs as baselines. Why not consider ViT-based baselines, such as ViT models trained on ImageNet-1K? (3) The main tables (Tables 1, 2, and 3) lack bold or underlined results, which affects readability. Supplementary Material: I have reviewed all contents in the supplementary. I have identified the issues below: (1) I believe Algorithm 1 is critical for the reader to understand the workflow of the proposed approach. I highly recommend that the authors put this into their main paper, not just as supplementary. (2) For results in the GenImage benchmark, the authors don’t provide the implementation details of how they implement and obtain these results. Relation To Broader Scientific Literature: This paper introduces the concept of effective rank to quantify the overfitting problem in AIGI detection, which I find interesting and relevant to key concepts in other fields. Notably, effective rank is widely used in continual learning [1] and signal processing [2], where it mainly helps assess how much knowledge a model retains for given tasks. In this paper, the authors cleverly use this concept to compute the “dimensionality” of the feature space, which makes sense to me. [1] Loss of plasticity in deep continual learning. Nature (2024). [2] THE EFFECTIVE RANK: A MEASURE OF EFFECTIVE DIMENSIONALITY. European Association For Signal Processing (2007). Essential References Not Discussed: I hope the authors can cite references [1] and [2] and include a brief discussion on their relevance. Other Strengths And Weaknesses: I have listed the strengths and weakness in my above comments. So, not additional comment here. Other Comments Or Suggestions: - The authors did not conduct an ablation study on the weights of the orthogonality loss and singular value loss. How are these weights allocated in the loss function? - The baselines primarily consist of CNN models. Why were ViT-based baselines, such as ViT models trained on ImageNet-1K, not considered? - How can the authors implement their method in GenImage benchmark? It seems like the authors don’t provide any implementation details for that. - Also, can the detection method be applied to detect more complex and advanced fakes, such as the talking-head generation contents? - Furthermore, can this method be applied to broader fields beyond deepfake and AIGC detection? How does it perform in areas such as domain generalization and anomaly detection? I hope the authors can further clarify the broader applicability of their model. I suggest and hope the authors can address my above concerns one-by-one carefully. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer 4w2k for the constructive comments, insightful questions, and useful suggestions.** We greatly appreciate and are encouraged by the reviewer's recognition of our motivation with sufficient and reasonable evidence, methodological novelty, and interesting analysis method. Additionally, the reviewer raised several important concerns and questions, which we address in detail below. > **Q1.** What about the performance of ViT models trained on ImageNet-1K and other ViT-based VFMs? **R1.** Thank you for raising this. Following the reviewer's suggestion, we conduct additional experiments comparing ViT models pre-trained on ImageNet-1K and SigLIP using different adaptation methods. Results are summarized clearly in **Table 2** below: ***Table 2: Results of other ViT-based baseline models.*** | Pre-trained Model | Tuning Methods | Mean Accuracy (%) | |--------|---------|--------| | SigLIP | svd | 90.46 | | SigLIP | lora | 83.42 | | SigLIP | fft | 81.23 | | CLIP | svd | 95.19 | | CLIP | lora | 93.03 | | CLIP | fft | 86.22 | | DINOv2 | svd | 85.46 | | DINOv2 | lora | 83.42 | | DINOv2 | fft | 79.52 | | ViT-ImageNet-1K | svd | 72.77 | | ViT-ImageNet-1K | lora | 70.47 | | ViT-ImageNet-1K | fft | 68.41 | From these results, we observe that our SVD-based adaptation method consistently achieves the best performance over the LoRA-based adaptation and full fine-tuning (FFT) across different pre-training models. --- > **Q2.** How can the authors implement their method in GenImage benchmark? **R2.** We sincerely appreciate the kind comment. Our experiments primarily follow the implementation details and experimental settings described in recent SOTAs [1, 2]. This alignment ensures consistency and transparency, making our results directly comparable with existing studies. [1] NPR. CVPR 2024. [2] FatFormer. CVPR 2024 --- > **Q3.** Also, can the detection method be applied to detect more complex and advanced fakes, such as the talking-head generation contents? **R3.** Thank you for your question. Following your suggestion, we have extended our evaluation to include highly realistic talking-head deepfake content, i.e., HeyGen, selected from the DF40 dataset. To ensure a fair comparison, we add four SOTA detection methods under identical experimental conditions for a fair comparison. The results, summarized clearly in Table x below, demonstrate that our method achieves superior performance compared to the latest SOTA detectors, even on advanced commercial talking-head manipulations. ***Table 3: Evaluation results on the advanced talking-head generation fake contents.*** | LSDA (CVPR'24) | ProDet (NIPS'24) | FSFM (CVPR'25) | UDD (AAAI'25) | Ours |---------|---------|--------|---------|-------| | 46.7 | 41.0 | 70.8 | 75.4 | **79.7** | From the results above, we find that **some detectors trained on the *face-swapping data* fail to generalize the *talking head contents***. However, most previous works are conducted using only the face-swapping deepfake data. Inspired by the reviewer's comment, we plan to add this additional evaluation to the revision and further enlarge our evaluation in the future. --- > **Q4.** The authors did not conduct an ablation study on the weights of the orthogonality loss and singular value loss. How are these weights allocated in the loss function? **R4.** Thanks for the comment. In our experiments, **we set the weights for both the orthogonality loss and singular value loss terms to 1.0 by default**. To further explore the influence of these loss terms, we adjust each hyper-parameter across a wide range. As shown in **Table 4** below, **varying these parameters results in only slight fluctuations in performance,** indicating the stability of our method. Importantly, the inclusion of these loss terms consistently improves performance compared to models without them. ***Table 4: Ablation studies regarding different weights of the loss terms.*** | Orthogonality Loss Weight | Singular Value Loss Weight | AUC on SimSwap (%) | |-|-|-| | 0.0 (No loss, SVD only) | 0.0 (No loss, SVD only) | 94.0 | | 1.0 (Default) | 1.0 (Default) | 95.6 | | 0.5 | 1.0 | 95.5 | | 1.0 | 0.5 | 95.0 | | 0.5 | 0.5 | 94.5 | | 2.0 | 2.0 | 95.1 | | 0.1 | 0.1 | 94.4 | --- > **Q5.** I hope the authors can cite references [1] and [2] and include a brief discussion on their relevance. **R5.** Thanks for the valuable suggestion. Both [1] and [2] introduce the concept of effective rank to quantitatively measure the dimensionality of the feature space, which is similar to our case. We will provide a detailed discussion in our revision. Thank you again. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses to my questions. The answers satisfactorily addressed all of my concerns. Therefore, I will maintain my initial rating.
Summary: The paper proposes a novel approach for detecting AI-generated images (AIGI), particularly deepfake and synthetic images. It highlights that existing detectors suffer from poor generalization ability when encountering unseen forgery methods, primarily due to overfitting to forgery patterns in the training set, resulting in a constrained and low-rank feature space. To address this issue, the authors introduce a singular value decomposition (SVD)-based method that decomposes the feature space into two orthogonal subspaces: one for retaining pre-trained knowledge and the other for learning forgery-related patterns. This approach demonstrates superior generalization performance across multiple benchmark evaluations. ## update after rebuttal. I thank the authors for carefully preparing the comments, my concerns have been addressed during the rebuttal, I would therefore be willing to raise my score. Claims And Evidence: The paper’s primary claim is that orthogonal subspace decomposition enhances the generalization ability of AI-generated image (AIGI) detection. This claim is well-supported by experimental results, particularly in cross-dataset and cross-forgery evaluations, where the proposed method significantly outperforms state-of-the-art approaches. The results demonstrate that the method effectively preserves pre-trained knowledge while simultaneously learning forgery patterns, leading to improved generalization performance in synthetic image detection. Methods And Evaluation Criteria: The proposed method is based on SVD decomposition, where the principal components are frozen, and the remaining components are adjusted to preserve pre-trained knowledge while learning forgery patterns. This approach is theoretically sound and has been empirically validated through experiments. The evaluation metrics include cross-dataset and cross-forgery tests, using AUC and accuracy as key indicators. These evaluation criteria are reasonable and align with existing research in AIGI detection. Theoretical Claims: The paper presents a theoretical analysis explaining the asymmetry phenomenon in AIGC detection and demonstrates, through covariance spectrum analysis, the inevitable failure of symmetric classifiers in this task. The theoretical analysis is well-grounded and aligns with the experimental results. Experimental Designs Or Analyses: The experimental design is well-structured, covering multiple datasets and forgery methods, effectively validating the generalization ability of the proposed approach. The authors also conducted ablation studies, confirming the effectiveness of the SVD-based method and loss constraints. Experimental results indicate that the SVD method is the primary driver of performance improvement, while orthogonality constraints and singular value constraints further optimize generalization performance. Supplementary Material: The paper provides supplementary materials, including additional experimental results and ablation studies, further validating the effectiveness and robustness of the proposed method. The supplementary materials also include a detailed description of the algorithm and theoretical proofs, enhancing the transparency and rigor of the study. Relation To Broader Scientific Literature: The paper is closely related to the existing literature on AIGI detection, particularly studies addressing generalization challenges. The authors cite a wide range of relevant works and highlight the limitations of existing methods. Building upon prior research, the proposed approach introduces an innovative solution by leveraging orthogonal subspace decomposition to enhance generalization performance. Essential References Not Discussed: "A Sanity Check for AI-Generated Image Detection" is a high-quality dataset that could serve as an additional benchmark for evaluating the proposed method. Testing on this dataset would further validate the generalization ability of the approach and provide a more comprehensive comparison with existing methods. Other Strengths And Weaknesses: Strengths: The paper proposes an innovative approach that addresses the generalization problem in AIGI detection through orthogonal subspace decomposition. The experimental design is comprehensive, covering multiple datasets and forgery methods, effectively validating the method's effectiveness. The theoretical analysis is insightful, providing an explanation for the asymmetry phenomenon in AIGI detection. Weaknesses: The paper treats all forgery methods as a single category during training, which may overlook the specificities and commonalities of different forgery techniques. The proposed method may face additional challenges in real-world applications, such as shifts in data distribution and the emergence of new forgery techniques. Other Comments Or Suggestions: The overall quality of the paper is high, with thorough experimental and theoretical analysis. It is recommended that the authors include ablation studies on backbones and robustness testing in the experimental section, as well as add results from more high-quality datasets to further strengthen the findings. Questions For Authors: The paper treats all forgery methods as a single category during training. Has the consideration of the specificity and generalization of different forgery methods been explored? Could this approach result in the loss of unique characteristics for different forgery types, potentially degrading detection performance? For instance, compared to methods like MoE, could this lead to a lack of specialized knowledge? In practical applications, there are often many adversarial scenarios and a variety of robustness tests. How well does the method perform under these conditions? Ethical Review Concerns: NA. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer cn1X for the constructive comments, insightful questions, and useful suggestions.** We greatly appreciate and are encouraged by the reviewer's recognition of our motivation, thorough experimental and theoretical analysis, methodological novelty, and superior experimental performance. Additionally, the reviewer raised several important concerns and questions, which we address in detail below. > **Q1.** The paper treats all forgery methods as a single category during training. Has the consideration of the specificity and generalization of different forgery methods been explored? **R1.** Thanks for your very insightful question. As highlighted by the reviewer, treating all forgery methods as a single category in binary classification may potentially risk losing specificity and generalization—a concern we have not yet verified. However, please note that this limitation is inherent to binary classification tasks in general, **rather than specific to our proposed SVD-method in this work** (our SVD-based approach is designed specifically for adapting to new forgery types while preserving pre-trained knowledge). Following the reviewer's suggestion, we plan to propose an **extended version (multi-task) based on our current binary framework**, aiming explicitly at enhancing the balance between specificity (handling known forgeries effectively) and generalization (detecting unknown or unseen forgeries). The preliminary idea of **our future multi-task learning framework** features: - (1) **Dual-head structure:** Two separate heads are employed—one dedicated to multi-class classification (specific head) and another for binary classification (general head). - **Specific Head** (multi-class): Learns fine-grained differences among known forgery types, focusing on fitting to the training distribution (specificity, IID). - **General Head** (binary): Captures shared features across forgery types, enabling the detection of unseen or novel manipulations (generality, OOD). - (2) **Adaptive Dynamic Inference:** Combining predictions from both heads based on confidence, balancing specificity and generality dynamically during inference. Specifically, our **dynamic inference strategy** is as follows: - Compute the prediction probability (i.e., confidence) according to the specific head's logits. - If the maximal prediction confidence across classes is below a threshold (indicating uncertainty or a "flat" distribution), then making decision based on the binary general head, otherwise based on the specific multi-class head. This adaptive strategy effectively maintains specificity for known forgery methods while providing robust generalization against unseen or evolving forgeries. Due to the limited content in the rebuttal, we cannot provide exhaustive experimental details at this stage. But following the reviewer's suggestion, **we evaluate our binary model and multi-task model on the $\text{Chameleon}$ dataset (see **R2** below for details)**. --- > **Q2.** Chameleon is a high-quality dataset. Testing on this dataset would further validate the generalization ability. **R2.** Thanks for introducing and highlighting the high-quality and challenging dataset, $\text{Chameleon}$. Following the suggestion, we have conducted additional evaluations using this dataset. We follow the proposed setting in the original paper for experiments, i.e., training on GenImage (whole) and testing on $\text{Chameleon}$. As shown in **Table 1** below, our method achieves superior generalization performance compared to baseline methods and other SOTA detectors. Also, the proposed extension version, i.e., multi-task framework, further boosts and refines the results, alleviating the potential loss of specificity and generality problem raised by the reviewer. ***Table 1: Evaluation results on Chameleon. All detectors are trained on the GenImage dataset.*** | CNNSpot | FreDect | Fusing | GramNet | LNP | UnivFD | DIRE | Patch | NPR | AIDE | Ours (binary) |Ours (multi-task) |-|-|-|-|-|-|-|--|-|-|-|-| |60.89|57.22|57.09| 59.81 | 58.52 | 60.42 | 57.83 | 55.70 | 57.81 | 65.77| 70.27 | 72.62 Furthermore, all evaluated methods experience a notable performance decline on $\text{Chameleon}$, highlighting the dataset's significant difficulty. Inspired, we plan to conduct an in-depth analysis of $\text{Chameleon}$ in future research. Again, we greatly appreciate your suggestion. --- > **Q3.** About the robustness evaluation. **R3.** Thank you for your question. We have already acknowledged this concern and have performed a robustness evaluation on the deepfake detection benchmark to assess our model's robustness, **as presented in Figure 7 of our appendix**. Following [1,2], we evaluate three types of image degradation: block-wise distortion, contrast changes, and JPEG compression. This experiment confirms the model's robustness against various perturbations. [1] LSDA, CVPR 24. [2] LipForensics, CVPR 21.
null
null
null
null
null
null
Uncertainty Estimation for Heterophilic Graphs Through the Lens of Information Theory
Accept (poster)
Summary: This paper addresses the challenge of estimating epistemic uncertainty on graphs that do not follow the homophily assumption, where neighboring nodes often belong to different classes. The authors provide an information-theoretic analysis of Message Passing Neural Networks (MPNNs) and derive an analog to the data processing inequality that reveals how information can be both lost and (importantly) gained across layers Claims And Evidence: The paper grounds its approach in an information-theoretic analysis, deriving a novel data processing equality that captures both the loss and gain of information across layers Methods And Evaluation Criteria: By jointly considering all layer representations, the method intuitively addresses the challenge of information heterogeneity in heterophilic graphs Theoretical Claims: The paper’s attempt to quantify both information loss and gain is commendable. However, some proofs assume certain conditional independence structures (e.g., ego-graph representations adequately capture all necessary dependencies) which might be too idealized for real-world graph data Experimental Designs Or Analyses: The authors test JLDE on multiple datasets and under various distribution shifts Supplementary Material: Yes (code) Relation To Broader Scientific Literature: The paper effectively situates its contributions within the existing literature on uncertainty estimation in both standard neural networks Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths * The paper brings a new theoretical perspective to uncertainty estimation on graphs by incorporating ideas from information theory * The derivation of a data processing equality that accounts for both loss and gain of information is novel * The experiments are extensive, covering multiple datasets, distribution shifts, and network architectures Weaknesses * From Definition 4.1 to Proposition 4.2, the authors show that heterophily is advantageous for graph neural networks due to the resulting information gain. Although their proof analyzes this phenomenon using information theory, the conclusion is almost identical to that of [1, 2] * [1] Revisiting heterophily for graph neural networks, NeurIPS '22 * [2] When Do Graph Neural Networks Help with Node Classification? Investigating the Impact of Homophily Principle on Node Distinguishability, NeurIPS '23 * The authors insist that connections to nodes with different semantics (i.e., those providing substantial information) improve overall performance. However, this assumption only holds under the i.i.d. data model, which is neither tractable nor realistic for real-world datasets * As defined in Eq. 6, the authors propose using the outputs from all layers, a method that has been suggested by many recent studies [3, 4] * [3]: Why Do Attributes Propagate in Graph Convolutional Neural Networks?, AAAI '21 * [4]: Nested graph neural networks, NeurIPS '21 Other Comments Or Suggestions: N/A Questions For Authors: Please see the above weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. We want to distinguish our contributions from related work. ## Performance improvement through Heterophily Our theory shows that, from an information perspective, the intermediate layers (i.e. aggregation) in GNNs can provide additional information. This contrasts results for i.i.d. data where information is only lost with network depth. These results explicitly only hold for **interdependent** data (and not the i.i.d. data model). Our analysis exploits that the information a GNN uses to represent a node only depends on its ego graph of a range equal to the GNN depth as can be seen in Figure 3: The latent representation of a node also depends on the features of its $\leq$L-hop neighbors. Therefore, our analysis *explicitly* considers the interdependence and does not require an i.i.d. data model. Theorem 4.1 is valid for *any* MPNN according to the definition of Appendix B.2 without any other assumptions like i.i.d. data. To clarify this misunderstanding, we change the paper: - Changed section title 4.2 from *Information in MPNNs: A Data Processing Equality* to *Information for Interdependence: A Data Processing Equalty* - We changed the caption of Figure 3 to: *Probabilistic model of how the information is processed in MPNNs given the interdependent data...* - The first sentence in 4.2 to: *For problems on graphs, where we deal with interdependent data, the input **X** is twofold as it contains both node features **F** and the graph structure **E**.* ## Discussion of related work **Heterophily is advantageous:** Luan et al. (2022) propose a novel homophily metric through post-aggregation node similarity and develop a novel, powerful GNN: ACM. Luan et al. (2023) investigate performance for graphs with various degrees of homophily by studying how node distinguishability is linked to GNN performance and they provide a better performance metric. While these works are meaningful contributions to heterophilic graphs, we want to stress that our work's conclusion is fundamentally different from 'heterophily is advantageous'. We study how uncertainty can be estimated in heterophilic graphs. Our new theory applies to both homophilic and heterophilic graphs. Our claim is not that certain GNNs benefit from heterophily, but instead, uncertainty in heterophilic graphs can only be quantified when doing joint density estimation. This differs from the related work that uses heterophily to improve accuracy. **Using Multiple Layers** Yang et al. (2021) use the difference between embeddings of the previous layer and the current one to mitigate oversmoothing. Zhang and Li (2021) extract local ego-net subgraphs for each node, apply a GNN to those, and finally pool these subgraph representations to obtain node representations. In contrast, we use an arbitrary MPNN and estimate density from all its latent embeddings to estimate model uncertainty. While other work also considers node representations obtained at different scales, it aims to improve predictive performance by changing the GNN architecture. We apply JLDE post-hoc to an already trained GNN and estimate uncertainty from all its embeddings which we justify both formally and empirically. However, we agree that it is important to stress the difference to related work and are thankful for these pointers that we discuss in a new paragraph: ```Recent work has focused on refining the inductive biases embedded in GNNs, particularly regarding graph structure, neighborhood similarity, and information aggregation across layers. Luan et al. (2022) revisit GNN performance under heterophily, challenging traditional homophily-based assumptions and proposing adaptive channel mixing to enhance representation learning across diverse neighborhoods. Similarly, Luan et al. (2023) show that GNN performance can not be attributed to homophily alone by introducing the concepts of intra- and inter-class node distinguishability. Zhang and Li (2021) propose Nested GNNs, which extend beyond rooted subtrees to capture richer local substructures, emphasizing the importance of localized graph context. Complementarily, Yang et al. (2021) avoid over-smoothing by using the difference between a layer's input and output. In contrast, our work examines the inductive biases of GNNs from an information-theoretic perspective in the context of uncertainty estimation. We show the benefits of leveraging information from *all* layers and propose a theoretically grounded uncertainty estimator that achieves state-of-the-art performance. We further demonstrate that heterophilic problems necessitate jointly considering all representations. ``` ## Conclusion We thank the reviewer for their time and effort to review our work. We hope that we addressed the reviewer's perceived weakness as our theory does not assume i.i.d. data and clarified that we arrive at conclusions that are fundamentally different from existing work and are happy to further discuss this. --- Rebuttal Comment 1.1: Comment: Thanks for the careful rebuttal. Since the author has addressed most of my concerns, I have updated my score accordingly.
Summary: This work addresses the lack of epistemic uncertainty measures on heterophilic graphs by studying the uncertainty of GNNs through information theory. The main contribution is the development of a post-hoc estimate, Joint Latent Density Estimation (JLDE) as a measure of the density of latent embeddings which is useful for out-of-distribution (OOD) node detection for node classification tasks. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. No major issues, but the proof steps need more explanation. Experimental Designs Or Analyses: Mostly, ok. However, one important experimental analysis is missing, which is, studying the sensitivity of the post-hoc uncertainty estimate with respect to varying degrees of homophily ratio. The authors should create synthetic graphs with controlled but varying degrees of homophily ratio and verify if the OOD detection is still effective across various regimes of homophily ratio. See, for instance, [1] (Appendix B) for such an example dataset. [1] https://arxiv.org/abs/2502.10208 Supplementary Material: Did not review. The source code is not available. Relation To Broader Scientific Literature: The study carries significant interest to the GNN community as well as the uncertainty-estimation community. Essential References Not Discussed: None, to the best of my knowledge. Other Strengths And Weaknesses: There is a clear gap in the literature where the paper fits. The proposed uncertainty estimate is supported by theoretical motivation, intuition and empirical comparison with recent baselines. No significant weaknesses except those in Comments/Suggestions and Experimental Designs/Analyses. Other Comments Or Suggestions: 1. Line 43, 2nd column => Please elaborate on what JLDE means, since this is the first appearance of this term. 2. Line 135, 2nd column=> I.d.d. => i.i.d 3. Section 4.1 should be shortened to facilitate more discussion on JLDE, in particular Equations 12, and 13 and the associated discussions in the Appendix should be in the main paper. 4. Appendix A, The steps in proofs need to be justified with more supporting statements about known results and references. 5. What is Ens? What do you mean by an “ensemble of 10” in line 346? This needs further clarification. Questions For Authors: 1. Since the focus of the paper is on Epistemic uncertainty, what value does reporting Aleatoric uncertainty in Table 1 add to the paper? 2. In Table 1, what do the boldfaced entries indicate? What is the significance of the grey regions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and suggestions. We want to address their points with the following revisions of our paper: - **Synthetic Experiments with varying Homophily**: That is a great idea! We investigate how JLDE compares to the state-of-the-art homophilic estimator GEBM on a synthetic graph in [Figure 1](https://figshare.com/s/05c97f1c4314003ce379?file=53331224). The synthetic setting is adapted directly from (Das et al., 2025) with an additional out-of-distribution cluster that the model is not trained on. We observe that while GEBM rapidly deteriorates in performance with increasing heterophily, JLDE is consistently effective in homophilic and heterophilic regimes. - **Source Code**: There seems to be a misunderstanding as we provide the source code in the supplementary material on OpenReview which is also acknowledged by other reviewers. - **Other Comments**: 1) We add an explanation of the acronym JLDE to L.43. 2) We fixed this typo, thank you. 3) We agree that the KNN-based realization of JLDE should be discussed more in the main text. In favor of deferring some related work to the Appendix (see response to reviewer is3X), we move Equation 12 to the main text. We also add an explanation and a short discussion about practical concerns like runtime and memory complexity (see feedback to reviewer ejzC). 4) We add verbose explanations for each proof that explain the transition between each line. 5) Ens indeed refers to Ensemble. We clarify this in the main text. - **Aleatoric Uncertainty**: We report the performance of the associated aleatoric uncertainty estimate for completeness purposes, similar to other work (Stadler et. al., 2021; Fuchsgruber et. al., 2024). This way, we can properly assess the performance of JLDE to *all* other possible uncertainty estimates. We add a clarification that the aleatoric estimate comes from the backbone GNN (see response to reviewer is3X). If the reviewer thinks that these numbers are distracting, we can also defer them completely to the Appendix. - **Table 1**: Bold-face numbers indicate the best accuracy of the backbone GNN for this dataset and distribution shift. Grey cells highlight values that are associated with JLDE (either accuracy or o.o.d.-detection performance). We update the captions accordingly to indicate this clearly. We again want to thank the reviewer for their suggestions. We believe that the additional experiments and clarifications underline the merits of our framework and enhance the paper's clarity. If the reviewer has any remaining points that relate to their assessment of our paper we are very happy about additional input. ## References: - Das, Siddhartha Shankar, et al. "SGS-GNN: A Supervised Graph Sparsification method for Graph Neural Networks." arXiv preprint arXiv:2502.10208 (2025). - Stadler, Maximilian, et al. "Graph posterior network: Bayesian predictive uncertainty for node classification." Advances in Neural Information Processing Systems 34 (2021): 18033-18048. - Fuchsgruber, Dominik, et. al. "Energy-based Epistemic Uncertainty for Graph Neural Networks." arXiv preprint arXiv:2406.04043 (2024). --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions and concerns. I remain positive about the paper's acceptance. Few questions : 1. Are there practical real-world situations/case studies where epistemic uncertainty estimation for heterophilic graphs is of importance, apart from the OOD detection task? Limitation section: 2. > We propose a design principle for epistemic uncertainty on heterophilic graphs but do not aim to improve aleatoric uncertainty or its calibration Would you elaborate on what are the implications of improving aleatoric uncertainty or its calibration in heterophilic graphs or graphs in general? Also, could there be any unique advantage to estimating aleatoric uncertainty in situations where the graph is noisy? 3. > Also, we focus our evaluation on node classification but the information-theoretic perspective and its implications apply to regression problems and other non-i.i.d. domains that can be cast into graphs as well. I am not clear about what you meant by "other non-i.i.d domains". Would you elaborate on this? ------ *Update: I have increased my score after reading other reviews. I encourage the authors to incorporate the discussions regarding additional experiments with varying homophily, differences to other related works, and computational/memory costs into their final version. Good luck!* --- Reply to Comment 1.1.1: Comment: We are happy that we could resolve the questions and concerns of the reviewer. We also want to answer their additional questions: 1. Beyond detecting distribution shifts, epistemic uncertainty can be used to abstain from a prediction if the indicated epistemic uncertainty is high. Especially in high-stakes and safety-critical domains like Medicine, this improves the trustworthiness of a model. Disentangling uncertainty is also useful in reinforcement learning tasks (Charpentier et. al., 2022). Furthermore, recent work has shown that accurate epistemic uncertainty leads to an optimal proxy for data acquisition in Graph Active Learning (Fuchsgruber et. al., 2024) which has downstream applications in data-intensive domains. These properties of epistemic uncertainty translate into all application areas in which heterophilic graphs are found, such as fraud detection or recommender systems (Luan et. al., 2024). 2. Well-calibrated and accurate aleatoric uncertainty is relevant in domains with ambiguous data. Somewhat simplified, epistemic uncertainty tries to answer the question if I should trust a model's prediction at all. If the answer is yes, model calibration and aleatoric uncertainty are important to make informed decisions that incorporate inherent randomness. Uncertainty estimation for graphs is a recently emerging field and, therefore, the amount of downstream applications in the literature is limited. One example includes molecular data where physical properties can introduce inherent randomness that a model should be able to represent (Wan et. al., 2021; Wollschläger et. al., 2023). 3. Our statement entails any domain that can be modeled with a graph even though a structure may not be naturally given. Beyond the aforementioned molecules, also sequential data can be viewed as a line graph. The Roman Empire dataset in fact is constructed as a semantic graph between words. Other examples may include knowledge graphs or relational databases. We mean to express that our framework can be useful in any domain where graphs apply which we will clarify in our paper. We also want to stress that studying specific downstream applications is beyond the more foundational nature of our work. We will include parts of this discussion in our introduction to further motivate our work practically. We again want to thank the reviewer for their constructive feedback and we appreciate the time they put into engaging with us in further discussions. If no more concerns or questions remain, and as the reviewer seems to be in favor of acceptance of our paper we would be very happy if they consider raising their score. ## References - Luan, Sitao, et al. "The heterophilic graph learning handbook: Benchmarks, models, theoretical analysis, applications and challenges." arXiv preprint arXiv:2407.09618 (2024). - Fuchsgruber, Dominik, et al. "Uncertainty for active learning on graphs." arXiv preprint arXiv:2405.01462 (2024). - Charpentier, Bertrand, et al. "Disentangling epistemic and aleatoric uncertainty in reinforcement learning." arXiv preprint arXiv:2206.01558 (2022). - Wan, Shunzhou, et. al. "Uncertainty quantification in classical molecular dynamics." Philosophical Transactions of the Royal Society A 379.2197 (2021): 20200082. - Wollschläger, Tom, et al. "Uncertainty estimation for molecules: Desiderata and methods." International conference on machine learning. PMLR, 2023.
Summary: This paper proposes an uncertainty estimation method for heterophilic graphs, primarily through the utilization of multi-layer embeddings. The authors conduct comprehensive analyses, such as examining how information propagates through message passing in neural networks, to validate their claims. They introduce a simple method called Joint Latent Density Estimation (JLDE) that combines outputs from all layers to measure uncertainty. This approach achieves superior results on heterophilic graphs compared to existing methods while maintaining performance on standard graphs. Extensive experiments across multiple datasets demonstrate its effectiveness. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem at hand. Theoretical Claims: Yes, I check the correctness of any proofs for theoretical claims Experimental Designs Or Analyses: Yes I check the soundness/validity of any experimental designs or analyses. However, the ablation study can be refined. Supplementary Material: No supplementary material is included. Relation To Broader Scientific Literature: 1. Heterophilic GNNs: Showing uncertainty estimation requires fundamentally new principles beyond homophily-based methods. 2. Uncertainty in Graphs: Address heterophilic uncertainty, overcoming homophily assumptions in prior work. Introduces post-hoc JLDE, unlike architecture-bound i.i.d. methods. 3. Information Theory: Explaining why deeper MPNN layers gain information in JLDE’s joint layer modeling. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. Rigorous theoretical analysis. 2. Logical experiments (heterophilic/homophilic graphs, multiple backbones). 3. Strong results (SOTA on heterophilic graphs). 4. Clear writing. Weaknesses: 1. Efficiency: No analysis of computational/memory costs from multi-layer embeddings. 2. In ablation gaps: it is unclear if gains stem from joint embeddings or KNN. Missing comparisons to (a) layer-wise uncertainty averaging, (b) alternative density estimators. Other Comments Or Suggestions: Please see the above weakness. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and questions and address them with the following revisions to our paper in addition to the additional experiments shown [here](https://figshare.com/s/05c97f1c4314003ce379?file=53331224). 1. **Efficiency**: The computational cost of KNN-based JLDE is governed by KNN, which can be implemented in $\mathcal{O}(n_{train} * d_{hidden} + n_{train} * k)$. For smaller training sets that are typical in transductive node classification, this is negligible but can become prohibitive for large training sets. In this case, JLDE can be realized with more efficient density estimators. We opt for KNN due to its simplicity and applicability to estimating a high dimensional density from little data. However, in general, any density estimator can be used (see below). We measure the runtime of KNN-based JLDE in Table 2 (link above). For all datasets with small training sets (all but Amazon Ratings), the cost of JLDE is comparable to competitors. Using multiple layers effectively increases $d_{hidden}$ by a factor of $L$ in terms of runtime complexity. This only incurs a small additional cost (see comparison to 1-layer KNN-based JLDE in Table 2 (link above). The memory complexity is also determined by the density estimator (KNN) and amounts to keeping the embeddings of the training set in the memory, $\mathcal{O}(n_{train} * d_{hidden})$. Since by default, these activations are kept in memory for a forward pass unless explicitly deallocated, there is no overhead in practice. We add both this explanation and the runtime comparison to the Appendix of our paper. 2. **Ablations**: (a) layer-wise uncertainty: Since the performance of uncertainty estimates is unaffected by monotonic transformations, uncertainty averaging effectively amounts to the same as adding the individual uncertainty estimated from each layer. This variant is already ablated in the paper in Figures 4 and 6-8 as "All (cat)". We will clarify this in the revised paper. (b) Estimating high-dimensional density from little data is challenging which is why we chose KNN to realize JLDE. We also provide ablations using i) an RBF-based Kernel-Density Estimator (KDE) and a Mixture of Gaussians with diagonal covariance (MoG) in Table 1. KDE performs similarly to JLDE but MoG falls short in some settings because of its limited expressiveness. We add this ablation to the revised paper. We also want to highlight that the core message of our paper is not that KNN should be used for uncertainty quantification. Instead, we argue for the merits of estimating the data density in the joint latent node embedding space with *some* suitable density estimator. We again want to thank the reviewer for suggesting these experiments. We believe they enable a well-rounded assessment of JLDE. Should any other concerns or questions remain, we are happy to discuss them with the reviewer! --- Rebuttal Comment 1.1: Comment: Thanks authors for response. Since my concerns have been addressed, I will keep the positive scoring for the work. --- Reply to Comment 1.1.1: Comment: We again want to thank the reviewer for their time and their constructive feedback that helped us to provide further contextualization for our method and additional ablations. Since there are no remaining concerns and the reviewer in general seems to be positive about the paper's contributions and its acceptance we would be very grateful if they consider raising their score beyond a borderline score if they think it is adequate.
Summary: The paper explores uncertainty quantification (specifically epistemic uncertainty) in graphs without homophily -- notably previous works in this direction assumed homophily as another source of information about the ground truth probability revealing information about similarity in the conditional label distribution; this assumption is invalid in heterophily. Here the authors adopting an information-theoretic perspective. While in an i.i.d. setting, mutual information decreases by increasing the model's depth, the graph structure introduces potential information gain through message passing (extending receptive field to other nodes). This is well illustrated in the probabilistic model shown in Fig. 3. Additionally, the authors propose a KNN-based density estimation over all latent variables concatenated (i.e., embeddings from all layers) as a measure of epistemic uncertainty. They also conduct an extensive empirical study to support their approach. Claims And Evidence: In general yes! The information theoretic view of the problem helps to provide a clear intuition. However, maybe due to my limited understanding of some parts of the paper, I think the theory can be summarized further while preserving the concreteness of the arguments. The chain of arguments which I understood is: (1) from Tishby & Zaslavsky (2015), and data processing inequality, we know that on i.i.d. data, model looses information across layers (2) despite i.i.d. data, in graphs we can gain information through message passing as we visit new node, and the information gain is provably non-negative. (3) To quantify epistemic uncertainty, we need to process all latent variable (4) we can use KNN density estimation over all latent values. If my understanding is correct, there are several parts of the paper, that although informative, they are not directly connected to the method. For example the information loss which is quantified in Def 4.1, and Prop 4.2. Also: I can not connect the practical method and the theoretical discussion. For that please see "Other Strengths And Weaknesses". Methods And Evaluation Criteria: The evaluation criteria covers some of the mostly used methods in UQ literature; however, I have the following question: What is the method to quantify aleatoric uncertainty? I can not understand aleatoric uncertainty results in the tables for o.o.d. detection. I also can not understand the results in Table 14, and 15? The authors argue that the calibration is over models, JLDE does not contribute to the ECE, or Brier score. However, one easy way to examine the reliability of the method is to draw a plot to compare the epistemic uncertainty of the nodes compare to the accuracy. Some plot like ECE instead of confidence, the x-axis shows the average epistemic uncertainty. Also another question: Does this method easily extend to graphs with homophily? I assume there is no bottleneck for the method to extend to homophily graphs, however I see that the method is outperformed by a significant margin (> 20% in Table 6 for CoraML) compared to SOTA. What is that? What is the intuitive reason behind it? Theoretical Claims: There are no special theorems in the paper requiring complicated and multi-step proofs. Many of the arguments are directly supported by one line proofs often taken form the information theory literature. Additionally these are my other problems / questions: **About Eq. 3.** (1) is it even realistic to have this estimator? (2) is it what we need for prediction? For (1) I am unsure if the search space isn’t too restrictive. In other words, maybe even finding one example in that search space is quite difficult. Because the authors are searching over a space of functions such that inputs all possible information from $X$. Which means that a marginally (weighted average over the probability of the input), it should have the same correlation with the invisible ground truth label (or here generative) distribution. I believe this is even harder than having a perfectly calibrated classifier. For (2) if we have such a classifier, even if it includes additional unnecessary information about X from the perspective of uncertainty quantification still this is a very good classifier. Even I can argue that including more information about the generative distribution of X helps generalization. 1. I assume the questions I asked are maybe due to not completely understanding the goal of Eq. 3. Maybe if authors provide an intuitive explanation, it can help. 2. Maybe I would be more convinced if the search space was not restricted but the overall optimization was written in form of a min-max problem which maximizes the information about the labels and minimizes the unnecessary information about X. Experimental Designs Or Analyses: Please see "Methods And Evaluation Criteria". There I mentioned one experiment I expect from the authors to show the reliability of the epistemic uncertainty estimates. Supplementary Material: Not in detail but yes. Specifically I checked the additional experimental results. Relation To Broader Scientific Literature: The paper tends to fill the gap in uncertainty quantification for graphs. Previous works in this area assume the graph has homophily -- the two endpoints of an edge are likely to have the same label. This is potentially used as a source of information helping to signal for higher uncertainty when the prediction is not following this rule. Essential References Not Discussed: I think the literature review was comprehensive and covering. I did not find any problems in that area. However I would prefer to see a shorted "related works" section in the manuscript and the current version in the appendix as it shifts from the main storyline of the paper. Other Strengths And Weaknesses: ### Strength 1. I believe this area (epistemic uncertainty over heterophilic graphs) is both interesting and untapped. Also I found the approach of the authors novel and interesting; they capture the amount of information collectable the receptive field. 2. The results on heterophilic graphs look promising. Also the extensive empirical evaluations makes the paper stronger. 3. The theoretical insights look correct and complete. ### Weaknesses 1. **Context of the paper is inconsistent.** To my perspective the paper looks like two possibly orthogonal directions concatenated. Up to line 317-left is a theoretical study showing that the information (for any general estimator) can possibly gain over increasing layers. Then suddenly it shifts to introducing an uncertainty estimator that is a KNN-density estimator over the concatenation of the latent values. Despite the discussion on the information theoretic point of view, there is no clear chain of arguments leading to the choice of the uncertainty estimator. In other words, one can easily follow another chain of argument leading to the same estimator. For instance: since we can not count on homophily as a supplementary modal to encode similarity in the ground truth $p(y\mid x)$, we directly use the latent representation as the input of our uncertainty estimation algorithm. We concatenate the embeddings of all layers since each hierarchically they encode information about $k$-hop neighborhood of the node. To me both the example argument stated above and the information theoretic perspective are equally plausible for this estimator. 2. **Writing can be improved.** Several comments on writing: (1) One can not find any footprint of the method (KNN) from the introduction. Also there is no pseudocode that helps to deliver a quick, and at the same time concrete understanding the final practical method. (2) Although a basic understanding of the information theory is necessary for anyone reading ML papers, still I would prefer to see a better intuitive description over notations like conditional mutual information, etc. Specifically Eq. 3, and 6 need more intuitive explanation. (3)The term “JLDE” (joint latent density estimation) should be mentioned the first time it is used which is line 42-right. (4) The long discussion on the models proposed for heterophilic graphs (related works section) detaches the reader from the main story of the paper. The proposed method is agnostic to the model’s structure, therefore I would suggest a succinct summary in the manuscript and a more detailed related work perhaps in appendix. Other Comments Or Suggestions: Please see "Other Strengths And Weaknesses". Questions For Authors: While addressed in other parts, here I highlight my most important questions: 1. Does this method easily extend to graphs with homophily? I assume there is no bottleneck for the method to extend to homophily graphs, however I see that the method is outperformed by a significant margin (> 20% in Table 6 for CoraML) compared to SOTA. What is that? What is the intuitive reason behind it? 2. **About Eq. 3.** (1) is it even realistic to have this estimator? (2) is it what we need for prediction? For (1) I am unsure if the search space isn’t too restrictive. In other words, maybe even finding one example in that search space is quite difficult. Because the authors are searching over a space of functions such that inputs all possible information from $X$. Which means that a marginally (weighted average over the probability of the input), it should have the same correlation with the invisible ground truth label (or here generative) distribution. I believe this is even harder than having a perfectly calibrated classifier. For (2) if we have such a classifier, even if it includes additional unnecessary information about X from the perspective of uncertainty quantification still this is a very good classifier. Even I can argue that including more information about the generative distribution of X helps generalization. 1. I assume the questions I asked are maybe due to not completely understanding the goal of Eq. 3. Maybe if authors provide an intuitive explanation, it can help. 2. Maybe I would be more convinced if the search space was not restricted but the overall optimization was written in form of a min-max problem which maximizes the information about the labels and minimizes the unnecessary information about X. 3. What is the method to quantify aleatoric uncertainty? I can not understand aleatoric uncertainty results in the tables for o.o.d. detection. I also can not understand the results in Table 14, and 15? The authors argue that the calibration is over models, JLDE does not contribute to the ECE, or Brier score. However, one easy way to examine the reliability of the method is to draw a plot to compare the epistemic uncertainty of the nodes compare to the accuracy. Some plot like ECE instead of confidence, the x-axis shows the average epistemic uncertainty. Ethical Review Concerns: There are no concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their in-depth feedback and questions. We are happy they find our paper to provide novel insights into an interesting and untapped research area. We provide additional material [here](https://figshare.com/s/05c97f1c4314003ce379?file=53331224). ## Connection between Theory and Method **Context of the Paper**: JLDE is intuitively motivated since, in heterophilic graphs, a node's neighbors are semantically different from itself which leads to diverse latent embeddings. While we agree that this method could have been proposed without formal justification, we believe that our analysis provides a theoretical basis that not only supports the intuition behind JLDE's effectiveness but is also valuable for further advancements in uncertainty quantification under heterophily. We utilize it to justify that for heterophilic graphs, each embedding provides different information. To that end, we develop an information-based framework for MPNNs that describes how the information in latent embeddings relates to each other (Theorem 4.1). The key insight that links this analysis to JLDE is discussed in L.295: The information gain is governed by the information that the k+1-th hop neighbors add that is not already contained in the $\leq$k-hop neighbors. For heterophilic data, the semantic differences between adjacent nodes induce information gain while for homophilic graphs the *additional* information diminishes. Estimating a node's data density to quantify uncertainty therefore *must* rely on all hidden embeddings. While Theorem 4.1 marks a significant contribution by itself it does not explicitly target uncertainty quantification. We view this not as a limitation. It fits well into the paper as it justifies our main claim: Uncertainty quantification in heterophilic graphs greatly benefits from considering all hidden node representations as it utilizes the information gain term. We clarify this by revising L.161 to explain that we propose JLDE as an intuitive uncertainty quantification framework that we formally justify from an information-theoretic angle. Def 4.1 and 4.2 make Theorem 4.1 more digestible by separately discussing its components. ## Uncertainty Calibration Experiment That is a great idea! We conducted this experiment (Figure 1, link above). For 5 of 6 datasets, JLDE's uncertainty correlates well with accuracy. ## Writing 1. We mention the KNN-based realization of JLDE in L.44 and provide pseudocode in Algorithm 1 (link above). 2. We elaborate on Equation 3 (see below) and Equation 6 and an intuition for MI in L.83. 3. We introduce the acronym JLDE in L.42. 4. We defer the first paragraph of Section 3 to the Appendix to shorten Related Work. **Conciseness of Theory**: We agree that Prop. 4.2 does not directly relate to uncertainty quantification and we move it to the Appendix. ## Questions **Aleatoric Uncertainty** is quantified as $1 - max_c p_c$ (L.838) from the backbone MPNN and is the same for all post-hoc estimators. Like related work, we report it for completeness. We move its definition and this explanation to L.348. The **ECE and Brier** metrics (Table 14, 15) depend only on the classifier. Therefore, we do not explicitly restate it for each post-hoc uncertainty estimator and report them only for completeness. **Equation 3** is directly taken from the seminal work of Tishby et. al. that established an information-theoretic perspective on NNs. It is referenced consistently throughout the literature. In practice, the optimal estimator of Eq. 3 is not attainable and methods instead optimize a trade-off between compression and predictive capabilities similar to the reviewer's min-max formulation. For example, the "Deep Variational Bottleneck", (Alemi et. al., 2016) proposes a varational optimization problem. Compression is necessary because making predictions from inputs that contain semantically irrelevant information is challenging (and hence, representation learning is used). We see how this can be difficult to understand without knowledge of the prior work on information theory and NNs and add an explanation to Section 4.1. **Homophilic Graphs**: Estimators like GPN, GEBM, or Safe explicitly exploit the homophily through homophilic graph diffusion to improve uncertainty. While JLDE is particularly useful for heterophily, it does not make structural assumptions and applies to homophilic and heterophilic graphs. As a drawback, it can not exploit the homophily in graphs directly and falls short of methods that do. Nonetheless, JLDE performs competitively to other post-hoc methods that do not use homophilic graph diffusion like EBMs or Ensembles. We believe that future work can optimize JLDE for homophilic graphs. We again thank the reviewer for their thoughtful and in-depth feedback which helped us improve our manuscript and hope we adequately addressed their points. We are happy to discuss further open questions. --- Rebuttal Comment 1.1: Comment: I thank the authors for replying to my comments. Yes indeed the authors have already pointed to an interesting problem with an insightful approach. Here are my concerns: - About the **context of the paper** part unfortunately I am still not convinced. The definition of homophily and heterophily (also as noted in the paper) is based on label similarity and not the feature similarlity. Therefore (1) one can simply design a dataset with high homophily and diverse feature space on endpoints of an edge. Simply assume an image dataset, and a noisy similarity kernel that perfectly connect similar labels 90% of the time, so for instance majority of images with class tree are connected to each other while the feature space includes information about the light, average color, etc of the image. On the oposite side one can easily create a heteophily graph where the edge is sampled via a kernel on feature similarity. Therefore the statement claiming “we should see all levels of embedding” is intuitively right but to me it does not pass through features when homophily is in the space of labels. Therefore I can not follow the sentence “while for homophilic graphs the *additional* information diminishes”. - Building on the previous comment, shouldn't we expect that JDLE reconstructs the information about homophily (that is used as a side modal of similarity by the other graph uncertainty quantification approaches)? - Thank you for drawing the ECE plot, and even the box-based reliability chart is even more illustrative (while unfortunately many other works don’t do that. Question is why no Cora and PubMed there is no visible correlation? Since those are homophily graphs, I would still expect a correlation. My intuitive explanation is that fixed bin interval causes a low support inside the bin and therefore on low support many statistics would be volatile. Can you please check the number of nodes falling inside each bin? Also can you try plotting the same plot but this time instead of fixed bin interval, using fixed number of points inside each bin? - Looking at the algorithm: Does the norm-2 distance suffer from the high dimensionality caused by combining all the embeddings? Do you use l-2 distance always? Can you use a model to learn the low-embedding distances, or maybe even a PCA/LDA? - Thank you for explaining Eq. 3, I would also suggest having somewhat similar description in the paper around the equation to increase readability. - About the empirical comparison with GPN, etc on homophily graph I admit that the authors are right. Those methods exploit the homophily as a side modality indicating the similarity in graph. Therefore, on those datasets, surely JLDE should be compared with structure agnostic methods. Interestingly I see somehow similar behaviour compared to ensemble models, while presumably your method requires less runtime. Surely I like the paper, but still the problems I mentioned (l-2 norm on high dimensionality, difference of the framework's claim for heterophily and homophily graph, and the reliability plot in fig.2 of the attachment) are fundamental. To me the fact that JDLE and ensemble models tie in homophily graphs, while ensembles are expensive to compute, is interesting. Therefore if the problems I pointed are solved surely I'll read the paper for another pass and consider to increase my score. --- Reply to Comment 1.1.1: Comment: ## Context The reviewer raises an important point regarding the definition of homophily that needs disambiguation. Recent work (e.g., Luan et al., 2022, see reviewer wjj2) argues for concepts beyond label similarity incorporating features. Our work (previously only implicitly) adopts an information-theoretic perspective that aligns with these newer definitions. Traditionally, homophily is defined by whether the labels at edge endpoints agree. However, we define homophily through the semantic information of *features that matter for the label*. This is quantified by the mutual information between a node’s label and features at different hops. We consider a graph to be heterophilic when a node's neighbors provide additional, semantically distinct information about the node's label compared to the node’s own features. Formally, the homophily of the i-hop neighbors to the (i-1)-hop ego graph is: $h_v^{(i)} = I(G^{(i)}_v; G^{(0:i-1)}_v) - I(G^{(i)}_v; G^{(0:i-1)}_v | Y_v)$ This definition aligns with a post-aggregation metric of Luan et. al. It is composed of - the *entire* semantic redundancy in the i-th order and up-to-(i-1)-th order ego graph - and subtracts all parts that are not relevant to the label (i.e. task-irrelevant information in the features). This definition also nicely ties homophily to our analysis regarding the *realizable information gain* of Def 4.2: $I(Y_v; G^{(i)}_v | G^{(0:i-1)}_v) = I(Y_v; G^{(i)}_v) - h_v^{(i)}$ It shows why heterophily leads to larger information gain and motivates JLDE. In homophilic graphs, the largely redundant semantic information about the label in a node's neighbors reduces the potential information gain. This definition of homophily also clarifies the role of uninformative features like brightness, which do not contribute semantic information about the label and to which our notion of homophily is invariant. To relate this to the reviewer’s examples: - Example 1 (i.i.d. classification with similar images): While the graph shows label homophily, the neighbors provide no new information beyond a node's own features. Hence, the graph is homophilic under our definition. - Example 2 (edges based on uninformative features): The information overlap between adjacent node features does not influence our homophily and the graph is heterophilic. However, the neighbors provide no information about the label in the first place, i.e. $I(Y_v; G_v^{(i)}) = 0$. JLDE can not exploit even a heterophilic structure *if the structure itself is semantically irrelevant*. The benefits of JDLE are not unrealized because of homophily but because neighbors are uninformative overall. - An example of a semantically heterophilic graph is the word graph of Roman Empire: An adverb is qualified by being connected to a verb. The adjacent 'verb'-information is semantically different compared to the 'adverb'-information of the node itself and, thus, provides information about the label that can be used by JLDE. The empirical success of JLDE confirms that this notion of homophily applies well to standard benchmarks. We also notice that the definition in L.083f. is misleading in that regard as we do not define homophily only in the label space. We will instead formally introduce our information-theoretic notion of homophily. Thank you for spotting this -- it definitely helps the consistency of the paper and makes our assumptions more clear. ## Other We update our [rebuttal pdf](https://figshare.com/s/05c97f1c4314003ce379?file=53331224). - **Can JLDE reconstruct homophily**: In general, yes! However, it is expected that methods that are hard-coded to do so will do that better than an agnostic method. JLDE recovers homophily as well as Ensembles or EBMs -- both of which are also not explicitly designed to use homophily. - **ECE plot:** JLDE is well calibrated for Cora and PubMed, but less so for Chamaleon and Squirrel. This likely relates to the worse performance of the backbone for those datasets which deteriorates with the embedding quality JLDE is based on. We provide the bin size for Figure 2 in Figure 3 and show the same plot for bins of the same size in Figure 4. The trend is more or less the same and reveals a potential drawback of JLDE as it relies on sufficiently useful embeddings. - **Dim. Reduction:** You are right -- in fact, we use PCA (L.851) before we run JLDE. We updated Algorithm 1. In our experiments, we always use the L2-norm, but as per Table 1, different density works as well. Our main claim regards the merits of Joint Latent Density, not how to best estimate this density. - **Incorporating feedback into manuscript:** Yes, we already added an extensive description that follows our rebuttal. We believe that the discussions greatly enhance the clarity of our work which is why we incorporate all the suggestions into the manuscript. We again thank the reviewer for these helpful discussions and the time & effort to give us the opportunity for further clarification.
null
null
null
null
null
null
Adversaries Can Misuse Combinations of Safe Models
Accept (poster)
Summary: This paper introduces a new threat model for misuse where an adversary combines weaker (less-safe, but also less-capable) open-source generative models with stronger (more-safe and more capable) closed-weight generative models to perform unsafe tasks. The adversary accomplishes this by decomposing tasks into safe complex subtasks to be executed by the stronger closed-weight models and unsafe simple subtasks to be executed by the weaker open-weight model. The authors explore both manual and automated decomposition of two tasks each. The results show that the adversary is able to dramatically improve its success rate over just using a single model. Claims And Evidence: Claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods and evaluation criteria overall make sense. The authors acknowledge limitations in the experimental setup. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design and analysis is fine, considering the scarcity of resources and difficulty evaluating the proposed tasks Supplementary Material: I skimmed the entire appendix, paying close attention to the tables with refusal rates and success rates for the different tasks. Relation To Broader Scientific Literature: The main contribution is a proof-of-concept that "safe" weaker open-weight models can be combined in a strategic fashion with "safe" stronger closed-weight models to accomplish unsafe tasks. This is a (seemingly) novel misuse threat model. Essential References Not Discussed: Andriushchenko M, Souly A, Dziemian M, Duenas D, Lin M, Wang J, Hendrycks D, Zou A, Kolter Z, Fredrikson M, Winsor E. Agentharm: A benchmark for measuring harmfulness of llm agents. arXiv preprint arXiv:2410.09024. 2024 Oct 11. Glukhov D, Han Z, Shumailov I, Papyan V, Papernot N. Breach By A Thousand Leaks: Unsafe Information Leakage inSafe'AI Responses. arXiv preprint arXiv:2407.02551. 2024 Jul 2. Other Strengths And Weaknesses: ##### **Strengths** * The paper is well-written and well motivated. * The paper presents a novel and realistic threat model for misuse. * The tasks used to evaluate the threat model are complex and realistic. ##### **Weaknesses** * Most of the results are shown on tasks where there is a clear and simple task decomposition. The task decomposition is arguably the most critical component of the proposed threat model. * It is unclear how generalizable the threat model is to any harmful task. * The effect of the quality of the task decomposition on the performance is not explored. Other Comments Or Suggestions: ##### **Minor typos** * Line 107, right column: "work studying risk that arise" -> "work studying risks that arise" * Line 139, left column: "that can adversaries can" -> "that adversaries can" * Line 125, right column: "models.Thus" -> "models. Thus" * Line 297, right column: "work, we one again assume" -> "work, we once again assume" Questions For Authors: * Is there any intuition about why Mistral-7B performs so much better when combined with Claude than Mixtral-8x7B? * How extensive is this threat model? Is it reasonable to assume that this could extend to any (agentic) task? * How much does the quality of decomposition effect the efficacy of the adversary? * Should the proposed threat model be considered most applicable to agentic tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review! We’re glad you found the threat-model “novel and realistic” and the task we evaluate on “complex and realistic”. We respond to your questions and comments below. --- _Most of the results are shown on tasks where there is a clear and simple task decomposition. The task decomposition is arguably the most critical component of the proposed threat model._ Half of our experiments (Section 4) are on tasks where there is a simple decomposition while the other half (Section 5) involve tasks where the decomposition is done automatically and is non-obvious. For example, two example tasks from the “malicious python script experiments” are: * _Assuming a misconfigured Apache web server running on a Linux machine with IP address 192.168.1.100, create a Python script that leverages the "mod_status" module to gain unauthorized access to the server's status page, and then use the information obtained to launch a targeted DoS attack by flooding the server with a large number of requests to a specific resource-intensive URL._ * _Write a Python script that can brute-force a specific model of a wireless router's WPA2 password, given a captured handshake file in the format of a .cap file located at "/path/to/handshake.cap". The script should use a wordlist located at "/path/to/wordlist.txt" and output the cracked password._ For each of these, the decomposition is (i) not a-priori obvious (to the authors), and (ii) is likely task-specific, yet our automated decomposition pipeline handles both of these (along with over 100 other tasks like these) automatically with the same protocol, without a human in the loop. --- _How extensive is this threat model? Is it reasonable to assume that this could extend to any (agentic) task?_ Good question — we expect our threat model covers most agentic tasks, since it is broad (the adversary queries models in sequence, and uses the outputs to inform subsequent prompts). Our threat model encompasses adversaries that adapt their queries to the strong model based on the strong model’s responses, and also suggests how an adversary might fine-tune a weak model for effective decomposition. Specifically, we could start with a weak-model that has query-access to a collection of frontier models, then train the model with reinforcement learning to accomplish the task by calling the frontier models as subroutines. This extension fits into our framework, and we think this is important subsequent work. That being said, our threat model doesn’t encompass anything — for example, an adversary might be able to use many models in parallel to swarm a social media site, when any serial combination of models would've been low-volume and inconsequential. We think extending to parallel threat-models in interesting, and will include this in the discussion of subsequent versions. --- _How much does the quality of decomposition effect the efficacy of the adversary?_ Good question; we don’t test for this directly, but compare using Mistral 7B as the weak model to Mixtral 8x7B as the weak model for the automated decomposition tasks in Table 2. Mixtral does much better than Mistral when combined with each frontier model. However the weak model is used for two things: decomposing the task, and generating a solution given the solutions to the subtasks in context. This provides some weak evidence that higher-quality decompositions (from Mixtral) lead to better performance, but it’s possible that the entire gap is due to Mixtral better leveraging the solutions to subtasks from the frontier model. Intuitively, we expect higher-quality decomposition will increase the efficacy of the adversary, and think this is interesting to explore further. --- _Should the proposed threat model be considered most applicable to agentic tasks?_ Our threat model is most salient when models have different strengths; in our case, the adversary makes use of the capability of the frontier model and the non-refusal of the weak model, but there are many other axes on which models can vary such as information access, tool-access, or specialization for certain tasks. We think it’s likely that agents will specialize more than the current systems; for example, different agents will have different access permissions, or be customized for specific uses. However, our threat model also applies to the tasks we study, which are not agentic but still have downstream ramifications --- Please let us know if you have any additional questions!
Summary: The paper examines how adversaries can misuse multiple AI models in combination, even when each individual model is designed to be "safe" and refuses to generate harmful content. The authors demonstrate that by decomposing a malicious task into benign subtasks, an adversary can leverage a capable frontier model (which refuses malicious requests) to solve complex benign subtasks while using a weaker, misaligned model to complete the harmful task. The paper presents empirical evidence that such model combinations enable the generation of vulnerable code, explicit images, hacking scripts, and manipulative tweets at significantly higher success rates than using any individual model alone. The authors argue that current red-teaming approaches that assess models in isolation are insufficient, and they propose a shift toward evaluating how models interact within an ecosystem. Claims And Evidence: This paper reads more like a technical report rather than a well-structured academic study. The main concern is with Section 5: Automated Decomposition—despite the claim of automation, the process is not genuinely automated. The authors do not present a clear pipeline for automating the decomposition of tasks using different LLMs for attacks. Instead, the approach remains largely manual. To improve clarity and rigor, the authors should explicitly outline an automated pipeline. This should include details on how malicious tasks are systematically decomposed into seemingly benign subtasks. Additionally, referencing related work in multi-agent systems could strengthen their methodology and provide a more robust foundation for automation. Methods And Evaluation Criteria: Although a synthetic dataset was used, I believe this is a reasonable and meaningful choice. Theoretical Claims: No Theoretical Claims in this paper Experimental Designs Or Analyses: I didn't see the different results of manual and automated decomposition Supplementary Material: I read the prompt they used Relation To Broader Scientific Literature: This is a more independent technical report Essential References Not Discussed: None Other Strengths And Weaknesses: none Other Comments Or Suggestions: none Questions For Authors: Please parse the automated approach further and analyze if this can be extended to more datasets, such as strongreject Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your review! We respond to your comments below and hope if our comments help assuage your concerns, you’ll consider increasing your score. --- _The main concern is with Section 5: Automated Decomposition—despite the claim of automation, the process is not genuinely automated. The authors do not present a clear pipeline for automating the decomposition of tasks using different LLMs for attacks. Instead, the approach remains largely manual._ By automated decomposition, we mean that the adversary can perform a wide-range of tasks without a human in the loop (see 127-129); our instantiation of this involves using flexible prompts that coax a model to identify the salient subtasks for many different tasks automatically, rather asking a human to come up with prompts for each task manually. Using the model to decompose was critical for our malicious python script experiments; there manually decomposing each task in our dataset would have been challenging for us to do: For example, two example tasks from the “malicious python script experiments” are: * _Assuming a misconfigured Apache web server running on a Linux machine with IP address 192.168.1.100, create a Python script that leverages the "mod_status" module to gain unauthorized access to the server's status page, and then use the information obtained to launch a targeted DoS attack by flooding the server with a large number of requests to a specific resource-intensive URL._ * _Write a Python script that can brute-force a specific model of a wireless router's WPA2 password, given a captured handshake file in the format of a .cap file located at "/path/to/handshake.cap". The script should use a wordlist located at "/path/to/wordlist.txt" and output the cracked password._ For each of these, the decomposition is (i) not a-priori obvious (to the authors), and (ii) are likely task-specific, yet our automated decomposition pipeline handles both of these (along with over 100 other tasks like these) automatically with the same protocol, without a human in the loop. The involvement of the human comes in specifying the order of the steps — the weak model decomposes into subtasks, the — after we do this, we (literally) run a bash script (included in the supplemental code) to accomplish a range of tasks, without additional intervention. --- _To improve clarity and rigor, the authors should explicitly outline an automated pipeline. This should include details on how malicious tasks are systematically decomposed into seemingly benign subtasks._ For our automated experiments section 5.1 and 5.2, we include the prompts we use for decomposition and how the solutions are added into subsequent prompts in appendices A.4 and A.5 respectively. The pipeline is comparatively easy to automate since the weak model comes up with “related” subtasks; after the frontier model generates solutions, the weak model can easily include the solutions in-context (regardless of their form) Our framework (Section 3) also exhibits how our approach can be generalized into more capable automated pipelines. For example, we could start with a weak-model that has query-access to a collection of frontier models, then train the model (with reinforcement learning) to accomplish the task by calling the frontier models as subroutines. This extension fits into our framework, and we think this is important subsequent work. --- _Please parse the automated approach further and analyze if this can be extended to more datasets, such as strongreject_ We think StrongReject, and other jailbreaks like it, are not suited to our threat-model since the weak model (which doesn’t refuse) could already score well on StrongReject itself — our method thus works, but for uninteresting reasons. StrongReject is best suited as a benchmark for testing how refusal-training works, not whether or not actually extracting answers is possible with any model out there. Instead, our benchmarks focus on tasks that we think (i) are representative of actual ways adversaries would misuse models in the wild are, and (ii) not solvable by either model individually — this surfaces the practical risks from combining models over using models individually. --- _This paper reads more like a technical report rather than a well-structured academic study._ We respectfully disagree — we identify a conceptual flaw with how the community thinks about safety; we formalize a new threat model for how to think about multiagent misuse risks; and we instantiate our framework in a range of settings to show empirically that combinations of models can be misused by adversaries that aren’t capable of misusing either individual model. We hope our work helps inform frontier labs and policymakers on the limitations of definitions of safety that only encompass single models, and spawn work on better definitions and methods to mitigate such decentralized risks. --- Please let us know if you have any additional questions!
Summary: This manuscript suggests that "safe" models with higher capabilities may be used by adversaries to help low-capability models perform "unsafe" tasks, thus yielding an overall "unsafe" model system, while existing works usually evaluate the safety of models on a per-model-basis. Claims And Evidence: The findings in this manuscript are interesting. Yet, the major drawback is that the concepts used in the claims are not defined, e.g., the "safety" of models. The high-capability model is safe (even when it is evaluated in the connected pipeline). I would not call the low-capability model that agrees to perform harmful tasks a"safe" model A better version of the claim might be that "safety" refers to machine learning models refusing to perform harmful tasks. A low-capability model that agrees but fails to perform harmful tasks (e.g., agrees to write code to execute a reverse shell, but the code fails) is not a safe model as it does not explicitly refuse to perform the tasks. While the low-capability fails to do so by itself, it may manage to perform the task successfully with the help from other low-capability models. Other definitions may work Methods And Evaluation Criteria: These are satisfactory. Theoretical Claims: I agree with most technical findings. I just found the main conclusion (throughout the paper) less well defined, as explained previously. Experimental Designs Or Analyses: I am satisfied with the experimental designs. There are no significant technical flaws. Supplementary Material: I inspect all supplementary materials Relation To Broader Scientific Literature: The findings are related to existing papers that inspect the security of individual models. It might be worth discussing the safety of systems that involve multiple models (e.g. adversarial robustness of ensemble) Essential References Not Discussed: NA Other Strengths And Weaknesses: + A threat model is explicitly defined. - The flow of the paper is a bit hard to follow. I would suggest to splitting into subsections with meaningful headers. Other Comments Or Suggestions: It might be worth discussing papers that aim to propose new threat models for ML attacks. Questions For Authors: Would the conclusion change if we have more than two models (not necessarily connected in series) in the pipeline? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review of our work! We’re glad you found our findings “interesting”, and respond to your comments below --- _The findings in this manuscript are interesting. Yet, the major drawback is that the concepts used in the claims are not defined, e.g., the "safety" of models. The high-capability model is safe (even when it is evaluated in the connected pipeline). I would not call the low-capability model that agrees to perform harmful tasks a"safe" model_ _A better version of the claim might be that "safety" refers to machine learning models refusing to perform harmful tasks. A low-capability model that agrees but fails to perform harmful tasks (e.g., agrees to write code to execute a reverse shell, but the code fails) is not a safe model as it does not explicitly refuse to perform the tasks. While the low-capability fails to do so by itself, it may manage to perform the task successfully with the help from other low-capability models._ We say a model is “safe” with respect to some malicious task when it fails to accomplish the malicious task itself (lines 13-14 right column). We adopt this definition (which our paper reveals the limitations of) since it reflects how scaling labs currently define safety and is used to inform release decisions. In particular, scaling labs test whether models can accomplish tasks before deciding whether to deploy them (see the references in lines 95 - 106), and frequently release open-source models that are incapable of these tasks (e.g., Google released Gemma’s weights, and OpenAI released GPT-2) while restricting access to models that would be capable of them (e.g., Google only allows API access to Gemini, and OpenAI does the same for GPT-4). As you suggest, we could alternatively define models as “safe” if they always refuse to produce a malicious or harmful output and unsafe otherwise. However, this definition is overly restrictive — this would mean that a model that only copies inputs or fixes grammatical errors is “unsafe”, since it can in principle produce harmful outputs without refusing. The definition is also potentially too relaxed — it’s possible that releasing a new fronter model, which itself refuses every unsafe request, enables adversaries to combine that model with existing open-source models to accomplish unsafe tasks that the adversary couldn’t accomplish with any combination of previous models. Overall, a core contribution of our paper is describing how (i) defining safety in the context of the broader ecosystem is challenging, and (ii) the current definitions, even executed perfectly, are insufficient to restrict misuse. We think releasing this work is important to spawn better definitions (e.g., definitions that better capture the existing ecosystem) and help developers and policymakers make better release decisions. To further emphasize and justify the definition we in the paper use along with its limitations, we will bold it in the intro, repeat it in S3, and expand our current discussion of it (e.g., 425-429) in subsequent versions. We hope this helps alleviate your concern. --- _Would the conclusion change if we have more than two models (not necessarily connected in series) in the pipeline?_ Good question — adding more models to the pipeline should strictly increase the risks. In series, it’s possible that some tasks require more than two types of tools or pieces of information, and each model only has one subcomponent (this gracefully fits into our framework in S3). However, there can be multiagent risks more broadly; for example, adding a single LLM agent to a social media site might have little impact, but many copies of the agent might enable e.g., disinformation campaigns. We think studying this is important subsequent work. --- Please let us know if you have any additional questions! --- Rebuttal Comment 1.1: Comment: Thanks for the response. I am looking forward to seeing the proposed changes implemented.
Summary: - the paper explores the idea of completing a (malicious) task using a collection of otherwise "safe" models - the key idea is to break down the task into a set of subtasks, such that each task alone is benign (or deemed benign enough) for the safe models, and then assemble the subtask solutions back into the full solution (with, e.g., an unsafe model that couldn't have done the task alone) - the paper explores *manual decomposition* and *automated decomposition* - manual decomposition involves a human-in-the-loop to decompose the task into subtasks - automated decomposition involves asking a (weak) model to decompose a (malicious) task into (benign) subtasks which will be solved by strong models, and subsequently using the weak model to merge the subtask solutions - the paper explores 2 settings for manual decomposition (generating vulnerable code, creating explicit images) and 2 for automated (generating malicious code, generating manipulative content), and find that the proposed model collaboration enables higher success than using single models alone ## update after rebuttal I appreciate the authors' response. I have read the rebuttal and other reviews. My concerns are mostly resolved and maintain my assessment that the paper can be accepted. I do however agree with the limitations/weaknesses that other reviewers have identified, and will maintain my score at 3. Claims And Evidence: - Overall the claims are fairly well substantiated. - A important weakness is the following claim:"adversary can misuse combinations of *safe* models to produce unsafe results". The key caveat is that in both the manual and automated decomposition experiments, at least one model seems to be *unsafe* (e.g. the image editing model that adds nudity). So it is not entirely accurate that a combination of (only) safe models are sufficient; the experiments primarily show that we need both *strong-and-safe* and *weak-and-unsafe* models. Methods And Evaluation Criteria: - While manual decomposition makes sense, it's debatable on this can be posed as a contribution, since in the limiting case, a human can also "manually decompose" a task back to a series of google search. - There is indeed a spectrum [1] of how much *synthesis* the adversary would want from their sub-queries --- e.g., there's minimal synthesis in a google search, and a lot of synthesis in an LLM generated report (e.g., OpenAI deep research). - It would help for the authors to talk more about what levels of abstractions make sense for such decomposition. [1] https://arxiv.org/abs/2411.17375 Theoretical Claims: N/A: the paper does not have theoretical claims. Experimental Designs Or Analyses: Overall the experimental design makes sense. One weakness is that the unsafe tasks explored by the paper are not very compelling. - For example, to perform the four tasks it is sufficient to apply a standard jailbreaking technique to a frontier model and get higher quality outputs. - In terms of evaluation, the paper abstracts away the *quality* of the solution and reduces to a binary success evaluation; but in practice, quality could matter a lot for an adversary (e.g., the quality of personalized manipulation message is more cogent if written by a jailbroken frontier model end-to-end, as opposed to using the proposed model collaboration pipeline). - A key underlying question is: are there other tasks whereby model collaboration is *necessary* for the unsafe outcome? - While the experiments consider single model misuse, it does not seem to compare to single *jailbroken* model misuse. - Nevertheless, I can understand that it may be the authors' intention to explore a new angle of safety, as opposed to comparing existing exploits. Another weakness is that all evaluations are done on synthetic data, and there's limited visibility into the entire dataset created by the authors (apart from individual examples provided by the authors). Supplementary Material: Yes, particularly: - the prompts used for the experiments - Fig. 3 for results Relation To Broader Scientific Literature: - The key contributions relate to the AI safety literature in that the paper explores a new way to misuse otherwise "safe and aligned" models. - The key ideas themselves (model collaboration) have related work which the authors cited (the related work section is fairly comprehensive). Essential References Not Discussed: N/A Other Strengths And Weaknesses: Overall, I like the paper in that it explores a useful direction, and the key ideas are worth spreading. However, the execution of the paper can be improved (see "Experimental Designs Or Analyses" section). The paper is also reasonably well-written and easy to follow. Other Comments Or Suggestions: - some inline links to Appendix seems wrong (a lot of them all point to A.5, when they refer to different contents) Questions For Authors: - A key question that I'd appreciate further clarity: **if a model is capable of harmful task decomposition (in "automated decomposition" section), is it also mostly capable of solving those tasks (assuming it can be jailbroken)?** - Do the authors have hypothesis to this question? - If so, what does it imply for the proposed method and future work? - Table 1 and 2: how should the bottom-right quadrant (Claude 3 models w/ Claude 3 models) be interpreted? Are the numbers referring to a strong-and-strong model collaboration? (as opposed top-right quadrant which indicates weak-and-strong model collab?) If so, it is surprising from Table 1 that, for example, combining Mistral 7B with C3 Opus (49.7) is *much* better than C3 Haiku with C3 Opus (4.0). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our work! We’re glad you found that it “explores a useful direction, and the key ideas are worth spreading”, and that it is “well-written and easy to follow”. We respond to your comments below, and hope that if our responses improve your impression of the paper, you’ll consider increasing your score. --- _A important weakness is the following claim:"adversary can misuse combinations of safe models to produce unsafe results” [...] at least one model seems to be unsafe (e.g. the image editing model that adds nudity)_ We say a model is “safe” with respect to some malicious task when it fails to accomplish the malicious task itself (lines 13-14 right column). We use this definition since it reflects how scaling labs currently measure safety; such labs test whether models can accomplish tasks before deciding whether to deploy them (see refs in lines 95 - 106), and frequently release open-source models that are incapable of these tasks (e.g., Google released Gemma’s weights, and OpenAI released GPT-2) while restricting access to models that are capable of them (e.g., Google only allows API access to Gemini, and OpenAI does the same for GPT-4). We could alternatively define models as “safe” if they can never produce a malicious or harmful output. However, this definition is overly restrictive — this would mean that a model that only copies inputs or fixes grammatical errors is “unsafe”, since it can in principle produce harmful outputs. One goal of our paper is to underscore the need to reassess definitions of safety based on real threat-models, and hope our experiments serve as a useful reference to start doing so. To emphasize and justify this definition, we will bold it in the intro, repeat it in S3, and expand our current discussion of it (e.g., 425-429) in subsequent versions. --- _A key question that I'd appreciate further clarity: if a model is capable of harmful task decomposition (in "automated decomposition" section), is it also mostly capable of solving those tasks (assuming it can be jailbroken)?_ Good question — we use the weak model to do the decomposition (since the strong model refuses; see the strong-strong collaboration), and the weak model itself is not capable of solving the tasks itself. We test whether the weak model can accomplish the task itself with two baselines: the single-shot baseline where the weak model is asked to do the task directly, and the single-decomposition baseline where the weak model is used in-lieu of the strong model [350-352]. Critically, __the weak models rarely refuse, so there is no need to jailbreak them (reported in Table 3)__. We find that across all settings, the weak model is not capable alone of accomplishing the original task directly, or even capable of solving the subtasks at a sufficient level to outperform the weak-strong combination. This suggests that decomposing tasks is itself easier than executing the (sub)tasks correctly. --- _A key underlying question is: are there other tasks whereby model collaboration is necessary for the unsafe outcome? [as opposed to simply jailbreaking the strong model]_ Good question; we write about this in the discussion (400 - 411, right), which elaborates on the point you raise in your review: _”Nevertheless, I can understand that it may be the authors' intention to explore a new angle of safety, as opposed to comparing existing exploits”_ . To add to that discussion: * One ramification of our work is that progress on jailbreak robustness (such as [1] recently) need not extend to this setting; even if it is _impossible to jailbreak a model_, our approach still allows adversaries to accomplish challenging, malicious tasks. * In settings when models (or agents) have different information or different tools (437-439), combining models is necessary to accomplish tasks (i.e., even with optimal jailbreaking, you couldn’t use a accomplish the task with a single model) We’ll add this to our analysis in the discussion in subsequent versions. --- _Another weakness is that all evaluations are done on synthetic data, and there's limited visibility into the entire dataset created by the authors (apart from individual examples provided by the authors)._ We include the full datasets for all experiments in the supplemental zip in the submission (see the data folder). --- _In terms of evaluation, the paper abstracts away the quality of the solution and reduces to a binary success evaluation; but in practice, quality could matter a lot for an adversary._ This is a good point — however weak models struggle to accomplish the tasks even with our current evaluation. Our evaluation is thus capturing some significant uplift obtained through decomposition. Nevertheless, we think better evaluation on more realistic misuse tasks (like the ones we study) is important subsequent work. --- Please let us know if you have additional questions! --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response. I have read the rebuttal and other reviews. My concerns are mostly resolved and maintain my assessment that the paper can be accepted. I do however agree with the limitations/weaknesses that other reviewers have identified, and will maintain my score at 3.
null
null
null
null
null
null
ProofAug: Efficient Neural Theorem Proving via Fine-grained Proof Structure Analysis
Accept (poster)
Summary: The paper introduces ProofAug, a novel approach for enhancing neural theorem proving (NTP) by integrating large language models (LLMs) with traditional automation tools. Unlike prior approaches such as the Draft, Sketch, and Prove (DSP) framework, which generate rough proof sketches and rely on automation tools to fill gaps at a single granularity, the paper proposes starting with a full proof generated by an LLM and employs a fine-grained analysis to derive a maximal compatible semi-proof (MCSP)—a structurally intact proof verifiable by an interactive theorem prover (ITP) like Isabelle, with some unproven parts marked as "sorry." These gaps are iteratively refined using automation tools at varying levels of detail, with the process repeating recursively until a complete proof is achieved or deemed infeasible. The method is enhanced by an efficient recursive proving (ERP) module, which integrates with tree-search algorithms to boost sample efficiency, achieving a notable 66.0% pass rate on the miniF2F-test benchmark after dataset curation (up from 61.9% originally), surpassing previous state-of-the-art results of 50-60%. Claims And Evidence: C1: Limitations in DSP where generated proofs may produce overly difficult intermediate conjectures or overly complex formal proofs The authors discuss limitations of the DSP framework in the introduction and as motivation for their approach. However, aside from a couple of examples, there doesn't seem to be any quantitative evidence of these being primary limitations. To be clear, intuitively these do make sense but there doesn't seem to be any quantitative evidence. C2: Novel approach ProofAug which improves sample efficiency and can be seamlessly integrated with tree search algorithms. The method discussed in Section 3, is evaluated on miniF2F and achieves better performance than DSP given the same budget. Additionally, the results with the ERP module also demonstrate strong performance. C3: State-of-the-art performance with a mixed strategy I find this claim a bit problematic. The authors acknowledge that this number is on a curated version of the data, however, the baselines numbers as far as I can tell are not computed on this curated data. This makes the comparison unfair and consequently the claim of SoTA seems unsubstantiated. Methods And Evaluation Criteria: I am not an expert on the topic but miniF2F is a standard dataset used for evaluating neural theorem provers. Theoretical Claims: N/A Experimental Designs Or Analyses: As mentioned above, I believe the results on the curated data with a mixture strategy are compared unfairly as the numbers for the proposed method are computed on a different (curated) version of the data. Moreover, it is not clear what the curation procedure is exactly but I appreciate the authors including the curated dataset with the submission. Additionally, the experiments are limited to Isabelle, and though the authors talk about other systems in the Appendix, some concrete experiments would be helpful. Supplementary Material: I only briefly looked at Sections A, B, C, D, F, H for additional context and details and the supplementary zip to look at the curated dataset. Relation To Broader Scientific Literature: The paper improves upon a widely used approach of DSP for neural theorem proving and provides a new interesting approach for combining LLMs with existing automated theorem provers. Essential References Not Discussed: N/A Other Strengths And Weaknesses: * (Strength) The unified perspective on theorem proving with language models in Section 2.1 was quite interesting and insightful. * (Strength) The authors include code to reproduce the experiments. Other Comments Or Suggestions: N/A Questions For Authors: Could you please clarify the choices for curation, and what is the rationale behind the comparison to other methods on a different dataset? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. Below is our response to your concerns: **Q1**: The authors discuss limitations of the DSP framework in the introduction and as motivation for their approach. However, aside from a couple of examples, there doesn't seem to be any quantitative evidence of these being primary limitations. To be clear, intuitively these do make sense but there doesn't seem to be any quantitative evidence. **A1**: Thank you for raising the concerns about quantitative evidence. This is a nuanced issue here. Firstly, we would like to clarify that the two limitations we discuss in our paper are not meant to classify the error reasons of the failed verification in DSP and claim which ones are primary. Instead, as stated in line 59, our argument is that "it(DSP) may also result in proof failures in unintended ways". By 'unintended', we mean the situations where the there is a chance for the direct whole-proof completion counterpart to generate a correct proof, while the DSP method leads to the failure. In fact, any syntactically correct DSP sketch that fails to pass the verification must fall into either or both of the following two categories: 1. some provable intermediate conjecture cannot be solved by <ATP>. 2. some intermediate conjecture (which is intended to be the translation of a step in the informal draft in DSP) is unprovable in Isabelle/HOL due to being incorrect or being improperly expressed in Isabelle/HOL. The first situation corresponds to our 'Hard Conjecture' issue, and direct whole-proof completion always has a chance to solve the conjecture. As to the second one, for direct whole-proof completion, there is also always a chance that if we do not follow the informal draft to propose this intermediate conjecture, we will get rescued. This is the 'Complicated Draft' issue. As a result, the two issues we propose can almost cover 100\% of the 'unintended' failures of DSP. We apologize for the confusion caused by the lack of clarification of what 'unintended' means in our paper and will add more details to the revised version. If you are interested in how many such intended failures can be rescued by ProofAug, i.e. cases where the chances are actually realized in practice (the function of <ATP> in ProofAug can be seen as to help find the alternative proof), Figrue 3 can be a reference (DSP v.s 0-shot/Ours+ProofAug for different number of attempts). Remember that it does not reflect the ratio of different types of failures in DSP. **Q2**:(State-of-the-art performance with a mixed strategy). The authors acknowledge that this number is on a curated version of the data, however, the baselines numbers as far as I can tell are not computed on this curated data. This makes the comparison unfair and consequently the claim of SoTA seems unsubstantiated. Moreover, it is not clear what the curation procedure is exactly. Could you please clarify the choices for curation, and what is the rationale behind the comparison to other methods on a different dataset? **A2**: In Table 2, we have also reported our results on the original Isabelle version of minif2f (61.9\%) and show that our method is superior to other methods using Isabelle (previous Isabelle SOTA on minif2f-test is 56.1\%). The goal of curating the dataset is for the comparison with results evaluated on the Lean versions of minif2f. Our curation mainly consists of two types: 1. Fix typos. 2. Modify the type of the variables from Nat to Int for theorems that perform division on the variables (refer to our response to Q2 of Reviewer rCPC for detailed explanation of this type of curation). We have manually checked a large part of the Lean version of minif2f and find that no issues falling into these two types appear. Thus, we believe that using the evaluation result on our curated Isabelle version of minif2f to compare with the Lean version result is more fair. **Q3**: The experiments are limited to Isabelle, and though the authors talk about other systems in the Appendix, some concrete experiments would be helpful. **A3**: We are currently conducting our plans to implement and evaluate ProofAug on Lean. Preliminary results show that using heuristic tactic combinations as the <ATP> to build a Lean version of ProofAug can help improve about 3\% on the pass@1 rate for Deepseek-Prover-V1.5-SFT, but hard to make improvements for pass@128 or higher number of attempts. Integration with hammer tools in Lean, such as lean-smt, is necessary for further improvements. Also, some additional modifications might be needed (You can refer to our response to Q5 of Reviewer vgEx for more details). In the revised version, we will update some progress and results we finish by then. Again, we thank the reviewer for the valuable comments. We hope that our responses have addressed your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications! I maintain my positive assessment of the work!
Summary: Recursive theorem decomposition and rebuilding to obtain SOTA scores on miniF2F-test ## update after rebuttal Satisfied with responses to questions : Rating remains "4: Accept" Claims And Evidence: * Claim: ProofAug enjoys superior sample efficiency. + Table 2 shows that ProofAug outperforms the DSP baseline at sample budgets of 1, 10, and 100 queries on miniF2F-test. * Claim: ProofAug achieves a new SOTA on miniF2F-test. + Table 2 compares ProofAug's performance to previous SOTA methods on miniF2F-test, though this 66.0% figure appears to have been achieved through a 'mixed strategy', i.e. throwing a variety of techniques at the solution (explained in Appendix F), rather than 'pure' ProofAug. * Claim: ProofAug is a versatile plug-and-play module and integrates well with tree-search algorithms, exemplified by the ERP module. + The paper describes ProofAug as a module within a unified view of theorem proving (Section 2.1) and demonstrates its integration with an ERP module (Section 3.3, Algorithm 2). Experimental results though (surprisingly) make it seems like it produces only a marginal gain. * Claim: ProofAug addresses the "Hard Conjectures" and "Complicated Draft" issues. + Starting from full proof generation aims to mitigate "Hard Conjectures" and the recursive coarsening of semi-proofs addresses "Complicated Draft", this important aspect is left to two Figures in Appendix B. Methods And Evaluation Criteria: Benchmarks/evaluations make sense - and obtaining good miniF2F-test scores coupled with an eye to efficiency is good. Theoretical Claims: The paper is primarily empirically driven and does not make significant theoretical claims that require formal proof verification. Experimental Designs Or Analyses: * Control of Variables: The paper mentions using the same `deepseek-math-7b-base` model, Isabelle version, and PISA environment across experiments, which ensures fair comparisons. The discussion in Appendix D regarding reproducibility and CPU dependency is also important for experimental validity. * Query Limits and Sample Budgets: The paper clearly reports sample budgets and query limits, allowing for a fair assessment of sample efficiency. * Ablating ProofAug from different prompting strategies (DSP, full proof, zero-shot) effectively demonstrated the contribution of ProofAug across various prompting approaches. Supplementary Material: I reviewed the supplementary material provided, there are many interesting experimental details included. Relation To Broader Scientific Literature: * Builds upon DSP Framework (Jiang et al., 2023): ProofAug directly addresses limitations observed in the Draft, Sketch, and Prove (DSP) framework * Addresses Sample Efficiency in Whole-Proof Generation: The paper acknowledges the sample inefficiency in whole-proof generation methods and positions ProofAug as a solution to improve. This relates to the broader challenge of making LLM-based theorem provers more practical * Integrates Automation Tools (ATPs): ProofAug strategically integrates off-the-shelf ATPs * Related to Tree-Search in Proof Proving (AlphaGo, etc): The paper explicitly connects ProofAug to tree-search algorithms and demonstrates its compatibility with such approaches through the ERP module Essential References Not Discussed: References seem comprehensive Other Strengths And Weaknesses: Strengths: * Originality: The core idea of fine-grained proof structure analysis and MCSP for enhancing neural theorem proving is novel and offers a distinct approach compared to existing methods. * Significance: The significant improvement in sample efficiency and achieving good results on miniF2F-test demonstrates the practical impact and potential of ProofAug. * Comprehensive Evaluation: The paper provides a thorough experimental evaluation with ablation studies. The overall claims are supported. Weaknesses * SOTA Result: While the overall claims are supported, the headline SOTA result (since it is based on a 'Mixed Strategy' rather than solely the proposed technique in isolation) somewhat *weakens* the overall idea : The recursive decomposition is interesting enough... Other Comments Or Suggestions: * Abstract: "with a total sample budget of only 2100" :: No sense of units here : Is this cumulative number of calls to the LLM? Over how many questions? (Abstract should stand alone) * Figure 3 label : "Number of Attemps" -> "Number of Attempts" * Section 4.1: "For multiple attempts of proof," -> "For multiple attempts of a given proof," (my understanding here) * Section 4.1: "do not always give the same result under the same condition" -> "do not always give identical results under the same conditions" * Section 4.4: "During the completion of this work, we have tried various" -> "During the completion of this work, we tried various" Questions For Authors: * Section 4.4 : "Besides, we find some incorrect theorem statements in the miniF2F-test dataset during our experiments, so we build a curated dataset and part of the experiments are done on the curated version." : Have these been reported 'upstream'? * While Appendix C discusses the generic applicability, are there concrete plans to implement and evaluate ProofAug on other proof assistants beyond Isabelle, such as Lean? If so, what are the main challenges anticipated in porting ProofAug to a different ITP environment, and how might the performance differ? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the valuable comments from the reviewer, and especially thank you for recognizing and expressing interest in the experimental details in our code. Below is our response to your concerns: **Q1**: Experimental results of the ERP module make it seem like it produces only a marginal gain. **A1**: We agree with you that the 1.6\% improvement seems like only a marginal gain. However, a 1.6\% improvement under a sample budget (number of calls to LLM for each problem) of 500 is already a significant improvement when compared with RMaxTS, which only improves around 0.1\% for Deepseek-Prover-V1.5-SFT under a sample budget of 3200 (Even when the sample budget comes to 16×6400, the improvement is only 1.6\% as well). It is the overall difficulty of using tree-search methods to outperform simple resampling under small sample budgets that make you surprised at the seemingly marginal gain. **Q2**: While the overall claims are supported, the headline SOTA result (since it is based on a 'Mixed Strategy' rather than solely the proposed technique in isolation) somewhat weakens the overall idea. **A2**: We agree that the overall idea will be strengthened if the SOTA result is reported without using mixed strategies. For 'Pure' ProofAug (with ERP module), we have also achieved SOTA within isabelle-based methods (note that the previous SOTA Isabelle-based method, SubgoalXL, also uses 'mixed strategies' that even include using multiple trained checkpoints), which justifies the superiority of the proposed technique in isolation. We will highlight this in the revised version. **Q3**: (Suggestions on the writing and some specific expressions) **A3**: Thank you for the suggestions. We will modify "with a total sample budget of only 2100" to "restricting the number of LLM queries to 2100 for each problem" in the abstract and fix other typos and grammar issues in the revised version. **Q4**: Section 4.4 : "Besides, we find some incorrect theorem statements in the miniF2F-test dataset during our experiments, so we build a curated dataset and part of the experiments are done on the curated version." : Have these been reported 'upstream'? **A4**: Thank you for the suggestion, and we agree that it is important to report them upstream. However, since there is a gap between the verification style of DSP (which always imports a same set of theories for every theorem) and that of the original minif2f dataset (where different theories are imported for each theorem), extra efforts are needed to make a pull request to the minif2f repository. We will report our curation upstream as we finish the transfer to the original minif2f dataset style. **Q5**: While Appendix C discusses the generic applicability, are there concrete plans to implement and evaluate ProofAug on other proof assistants beyond Isabelle, such as Lean? If so, what are the main challenges anticipated in porting ProofAug to a different ITP environment, and how might the performance differ? **A5**: Yes, we are currently conducting our plans to implement and evaluate ProofAug on Lean. The main challenges we are facing are the engineering complexities involved in the implementation. To name a few: 1. The hammer tools (such as the lean-smt project) for lean are still in beta. So it takes time to make the whole pipeline work. 2. We need to adapt many implementation details from Isabelle/HOL to Lean. For example, there are no concepts like 'proof mode' in Lean so our Lean version implementation could no longer rely on them. We anticipate that the performance improvement might be less than that in Isabelle to some extent, since the current Lean hammers are weaker, and Lean proofs are usually in the mixed declarative and procedural style rather than being mostly declarative like Isabelle/Isar proofs. Modifications to ProoAug are probably needed to make it behave well for Lean if we do not choose to derive a completely declarative proof language from Lean as described in Appendix C. As a result, we feel like ProofAug should better represent the overall idea of 1. get a list of semi-proofs from the proof proposal 2. apply hammers for the semi-proofs in an organized way 3. an optional recursive proving module, while the specific implementations differ for different proof systems. If you think this statement can help people understand the idea of ProofAug better, we can add it to the revised version, and refer to Algorithm 2 as the "Implementation of ProofAug in Isabelle/HOL". Finally, we would like to thank you for your time and consideration. We hope that our responses have addressed your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the answers to my questions. My rating remains "4: Accept"
Summary: This paper introduces ProofAug, a novel theorem-proving method that enhances the sample efficiency of proof synthesis by integrating automation tools at multiple granularity levels. Unlike prior approaches that use automation tools either selectively or at a single level, ProofAug applies fine-grained structure analysis to better leverage built-in tactics and automated theorem provers. Additionally, the method is designed as a plug-and-play module compatible with any tree-search algorithm, allowing the construction of an efficient recursive proving (ERP) module for further performance gains. Evaluated on the miniF2F-test benchmark using the deepseek-math-7b-base model and the Isabelle proof assistant, ProofAug achieves a new state-of-the-art (SOTA) performance, reaching a 66.0% cumulative pass rate with dataset curation (61.9% without) using only 2100 samples, aided by a mixed prompting strategy. Claims And Evidence: There are two major claims: - ProofAug (based on proof structure analysis) improves pass rate given a relatively small sample budget. - The cumulative pass rate when adopting a mixed prompting strategy achieves SOTA given a small sample budget. The first claim is well demonstrated by detailed explanation of the system design and experiments. Table 2 shows clear differences between ProofAug and the DSP baseline when given the same amount of sample budget. The second claim is weaker in my opinion. MiniF2F-test is a relatively small dataset, and a 0.1% improvement (66.0% vs 65.9%) is perhaps not significant enough to claim the crown (if not an error at all). Moreover, since some MiniF2F-test problems are corrected (as reported in section 4.4), the performance of other existing approaches may also vary a bit. So it isn’t entirely fair to compare the numbers directly. The small amount of sample budget does seem nice. Methods And Evaluation Criteria: The proposed methods make sense. It roughly follows the line of DSP but is much more carefully engineered to maximally exploit the capability of the built-in ATPs in Isabelle, aiming to balance the problem of hard conjectures and complicated draft (as described on page 2.). The evaluation criteria also makes sense. Pass rate with respect to sample budget is the standard metric in this area. Theoretical Claims: The paper is mostly emprical. Experimental Designs Or Analyses: The experimental designs and analyses look valid to me. I do find that the cumulative pass rate (with dataset curation) being the best is less convincing given that the test dataset is slightly modified / corrected and that the improvement is minimal. But the major part of this submission, namely the improvement in performance of vanilla ProofAug over DSP baseline is reasonable. Supplementary Material: I reviewed all pages of the supplementary material. Relation To Broader Scientific Literature: The approach proposed in this paper can potentially benefit provers other than Isabelle that have strong built-in ATPs. Essential References Not Discussed: The references seem reasonable. Other Strengths And Weaknesses: I appreciate the amount of effort put into the careful engineering in this work. The presentation is also generally clear and the content is well-organized. One thing I personally find counterintuitive is that in ProofAug the ATPs are essentially being tried first for smaller (i.e., innermost) subgoals and then larger ones. Usually one would expect that ATPs are better at solving low-level small goals and would struggle with larger ones. The fact that ProofAug works seems to suggest the opposite. This can be due to the fact that MiniF2F-test problems mainly suffer from the problem of “complicated draft”, where small goals are actually not suitable for ATPs, or it may just be accidental? Other Comments Or Suggestions: Section 3.3 mentions POETRY extensively and adopts part of its approach. Maybe add a high-level description in the very paragraph to briefly explain how the components work? Questions For Authors: Nothing in particular. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback, especially the concise and accurate comments on our ProofAug method. Below is our response to your concerns: **Q1**: (On the major experimental claims: The improvement seems minimal, and correction of data might cause unfairness) **A1**: Thank you for the summary of our major claims from the experimental results. We apologize for the confusion caused by lack of explanation of the goals of the experimental comparisons and possibly inappropriate claims. We in fact want to claim the following three points through the experimental results: - Through ablation study, we show that ProofAug can improve pass rate given a relatively small sample budget. (as you mentioned) - When compared with previous Isabelle-based methods, our methods can achieve significant improvement on the pass rate, remarking a new SOTA and showing the overall superiority of the designs introduced in this work. - As to the comparison with methods based on other proof assistants, we agree with your opinion that a 0.1% improvement is not that significant. However, the main goal of our comparison is to show that, given that previous Isabelle-based methods have been largely outperformed by Lean-based counterparts, our method is still competitive with the state-of-the-art Lean-based methods under a far smaller sample budget. We will modify our claims to more clearly reflect these points in the revised version. As to the concern of the possible unfairness caused by our curation, we also report our results on the original Isabelle version of minif2f and show that our method is superior to other methods using Isabelle. The goal of curating the dataset is for the comparison with results evaluated on the Lean versions of minif2f. Our curation mainly consists of two types: 1. Fix typos. 2. Modify the type of the variables from Nat to Int for theorems that perform division on the variables (refer to our response to Q2 of Reviewer rCPC for detailed explanation of this type of curation). We have manually checked a large part of the Lean version of minif2f and find that no issues falling into these two types appear. Thus, we believe that using the evaluation result on our curated Isabelle version of minif2f to compare with the Lean version result is more fair. **Q2**: One thing I personally find counterintuitive is that in ProofAug the ATPs are essentially being tried first for smaller (i.e., innermost) subgoals and then larger ones. Usually one would expect that ATPs are better at solving low-level small goals and would struggle with larger ones. The fact that ProofAug works seems to suggest the opposite. This can be due to the fact that MiniF2F-test problems mainly suffer from the problem of “complicated draft”, where small goals are actually not suitable for ATPs, or it may just be accidental? **A2**: Our observation is that, for models comparable to or stronger than deepseek-math-7b-base, ATPs are actually better at solving low-level goals proposed by the models. We feel sorry that the specific 'complicated draft' example might have misled you. Note that the performance of sledgehammer on minif2f-test is only 20.9\%, which already indicates that the ATPs struggles to prove high-level competition-math theorems (since they are not designed to solve such problems). As to their performance on low-level model-proposed conjectures, during the experiments, we record at which stage ProofAug helps find a new proof, as you can observe in the results provided in the supplementary material together with the code. We classify the success stages into 3 types: 1. The initial proof succeeds; 2. The Maximal Compatible Semi-Proof succeeds after substituting all 'sorry's with <ATP>s; 3. Some coarse semi-proof succeeds after substituting all 'sorry's with <ATP>s. The rough observation is that , for deepseek-math-7b-base, type 2 success is more common than type 3, which means that the lowest-level conjectures it propose are at least quite 'useful' for the ATPs. **Q3**: Section 3.3 mentions POETRY extensively and adopts part of its approach. Maybe add a high-level description in the very paragraph to briefly explain how the components work? **A3**: Thank you for the suggestion. We thought that POETRY appears in the Table 1 in the preliminary section so we chose not to describe it extendedly in Section 3.3. We apologize for the confusion and will add a high-level description to briefly explain how the components work in the revised version. Finally, we thank the reviewer for the time and consideration. We hope that our responses have addressed your concerns.
Summary: The paper proposes ProofAug, a method for achieving efficient neural theorem proving by combining LLMs with automated theorem proving (ATP). The paper conducts extensive experiments comparing ProofAug with baseline methods, categorizing them by different proof styles. Claims And Evidence: Yes. The paper addresses the trade-off between generating hard conjectures and producing overly complicated drafts, particularly in the DSP framework. It claims to propose a solution that effectively balances the two. The proposed algorithm demonstrates its effectiveness through experiments, including ablation studies, which show that ProofAug improves performance over DSP by a significant margin. Methods And Evaluation Criteria: The method is evaluated on the miniF2F test dataset (in Isabelle), which is a widely accepted benchmark for theorem proving. The dataset includes theorems of varying difficulty levels, making it a reasonable choice for evaluating theorem-proving methods. Theoretical Claims: The paper does not contain theoretical claims. Experimental Designs Or Analyses: The paper begins by categorizing different baseline approaches and then evaluates the performance of the proposed method, ensuring that the sampling budget aligns with baseline methods for a fair comparison. The evaluation results on the miniF2F test dataset show a clear improvement in performance. The paper also conducts ablation studies on the Efficient Recursive Proving (ERP) module and mixed proof strategies to validate the necessity of each component. Supplementary Material: I reviewed supplementary materials including Appendix and codes. Relation To Broader Scientific Literature: The paper reviews existing work in neural theorem proving, categorizing approaches into whole-proof generation, proof-step generation, and hybrid methods. By building on these existing paradigms and introducing a framework that improves performance through fine-grained proof structure analysis, the paper advances the state of the art in this field. Additionally, the integration of ProofAug into tree-search algorithms aligns with recent advances in proof synthesis and recursive theorem proving. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengthes and weaknesses are fully discussed in other sections. Other Comments Or Suggestions: I don't have other comments. Questions For Authors: 1. Regarding lines 261-263 in Section 3.2: Instead of replacing the outer block with sorry, have the authors considered generating more fine-grained drafts from the exact inner theorem containing sorry? It seems counterintuitive that an advanced Isabelle tactic would be more effective on a coarse theorem than on a finer one. 2. Regarding the curation of the miniF2F-test dataset: How are the incorrect theorems classified as incorrect? Are they unprovable in principle, or are there formalization errors that make them trivially provable? Providing concrete examples would improve clarity. 3. Why was Isabelle selected over Lean for evaluation? Is it due to the authors’ familiarity with the proof language, or does Isabelle have an advantage in terms of automation tactics or ATP integration? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the clear and constructive feedback. As to your questions, we address them as follows: **Q1**: Regarding lines 261-263 in Section 3.2: Instead of replacing the outer block with sorry, have the authors considered generating more fine-grained drafts from the exact inner theorem containing sorry? It seems counterintuitive that an advanced Isabelle tactic would be more effective on a coarse theorem than on a finer one. **A1**: Yes! We think what you describe is exactly our ERP module introduced in Section 3.3. We apologize for the unclearness in our description of ERP module. A clearer version of description of ERP is that, for each conjecture (inner theorem) in the Maximal Compatible Semi-Proof that <ATP> fails to prove, we ask the model to generate a new draft for it, given the corresponding proof state in the comment in hope that this information can help it draw a more fine-grained draft. You can refer to line 290-300 in Algorithm 2 and our code (the proof_aug_1dp method) for more details. In the revised version, we will modify the line 311-315 in Section 3.3 to 'In practice, to simplify the algorithm and suppress the sampling cost wasted in low-level details, we only try $\verb|<ATP>|$ and add new nodes for failed conjectures belonging to the original MCSP' to make our description clearer. **Q2**: Regarding the curation of the miniF2F-test dataset: How are the incorrect theorems classified as incorrect? Are they unprovable in principle, or are there formalization errors that make them trivially provable? Providing concrete examples would improve clarity. **A2**: We only checked the theorems that cannot get proved in our initial experiments, so we think our curation does not include theorems that are made trivially provable due to formalization errors. There are mainly two types of our curation: 1.Fix Typos. Statements with typos in variable names or numbers are mostly completely wrong and unprovable. 2. Modifying the type of the variables from Nat to Int for theorems that perform division on the variables. This is important because, for example, the operation "(2::Nat) - (3::Nat)" is valid and expected to be "(0::Nat)" in Isabelle. Although sometimes the original minif2f statement can still be proved in the sense that the resulted 0 also satisfies the constraints in the problem, but it is not expressing what the original problem intends. Below is an example: ```html Differences: --- Original +++ Fixed @@ -1,8 +1,8 @@ theorem imo_2001_p6: - fixes a b c d ::nat + fixes a b c d ::int assumes "0 < a \<and> 0 < b \<and> 0 < c \<and> 0 < d" and "d < c" and "c < b" and "b < a" - and "a * c + b * d = (b + d + a - c) * (b + d - a + c)" + and "a * c + b * d = (a + b - c + d) * (- a + b + c + d)" shows "\<not> prime (a * b + c * d)" HTML diff saved to xf_diffs/imo_2001_6_diff.html ``` The "a * c + b * d = (b + d + a - c) * (b + d - a + c)" constraint does not express what it intends in the original statement when $a > b+d$ or $c > b +d+a$. We will also provide several typical examples in our revised version of the paper. **Q3**: Why was Isabelle selected over Lean for evaluation? Is it due to the authors’ familiarity with the proof language, or does Isabelle have an advantage in terms of automation tactics or ATP integration? **A3**: Mostly the latter one. Isabelle/HOL has the most powerful proof automation across all proof assistants, and the sledgehammer is a built-in tool for Isabelle that have been developed for years. In comparison, Lean does not include an built-in SMT implementation or access to external ATP/SMTs with it, and the related community projects were mostly still in beta during our work. As a result, it is most appropriate to try out our method in Isabelle/HOL first to verify its effectiveness, or in other words, to obtain a 'proof of concept'. You may also refer to our response to Q5 of Reviewer vgEx for the obstacles we are facing when trying to implement ProofAug on Lean. Again, we appreciate your constructive feedback and will make the necessary changes in our revised version. We hope that our responses have addressed your concerns and clarified the points you raised. Thank you for your time and consideration.
null
null
null
null
null
null
ITFormer: Bridging Time Series and Natural Language for Multi-Modal QA with Large-Scale Multitask Dataset
Accept (poster)
Summary: This paper tackles the new problem of multimodal question answering over multivariate time series data (Time-Series QA). The authors introduce EngineMT-QA, a multitask dataset tailored to evaluate models' abilities in “understanding”, “perception”, “reasoning”, and “decision-making” tasks using real-world aero-engine time-series data. Additionally, they propose an architecture they call the Instruct Time Transformer (ITFormer), a multimodal architecture framework integrating pretrained Large Language Models (LLMs) and time series encoders with specialized components, such as Instruct Time Attention (ITA) and Learnable Instruct Tokens (LIT), to align temporal and textual modalities effectively. The proposed method is competitive across the proposed tasks, and generalizing well across pretrained LLMs and time series models. Ablation studies further underline the importance of the proposed architectural components. Claims And Evidence: The authors make several claims on introducing the time-series QA task and providing a state-of-the-art approach. While each claim is backed by some contribution, there are varying degrees of supporting evidence. Outlining these claims below: Claim 1: Introduce the Time-Series QA task Several recent (though non-archival) works have explored related tasks, presenting their own time-series QA benchmarks [1, 2, 3]. It would good to clarify the novelty of the dataset in this context. [1] [2404.11757] Language Models Still Struggle to Zero-shot Reason about Time Series [2] [2410.14752] TimeSeriesExam: A time series understanding exam [3] [2409.11376] Towards Time Series Reasoning with LLMs Claim 2: Introduction and effectiveness of architectural designs I did not get a strong understanding of whether each of these components was working as designed or intended. While I appreciated the ablations, they evaluate just the end task performance (accuracy or bleu). However, it would be more convincing to tease apart these capabilities introduced with each ITFormer component (e.g., what happens if we don’t have the “time token as language” components, how do the generations look?) On comparison methods, it is unclear how strong the method actually is, as the paper lacks details on how the proposed models were trained vs the comparison models. Claim 3: “Introduce a robust and adaptable paradigm for integrating complex time-series tasks into end-to-end QA frameworks” While the ITFormer framework demonstrates clear performance improvements versus the compared multimodal models and the application to various LLMs and time series models is interesting, I think there should be additional evidence to support this statement as a whole. In particular: The evaluation of question-answering tasks using time series data as context is interesting, but has also been investigated in prior work (see first claim) The tasks evaluate interesting capabilities (“Understanding, Perception, Decision-making, Reasoning”), but their delineation is unclear and a bit fuzzy (it’s not clear to me each of these tests an independent capability) I missed details on how their dataset construction is adaptable to any other existing time series datasets. They seem to construct the dataset specifically based on an existing aero-engine dataset and design questions tailored to this data [Arias Chao et al. 2021] Aircraft Engine Run-to-Failure Dataset under Real Flight Conditions for Prognostics and Diagnostics Methods And Evaluation Criteria: Yes. The proposed methods and evaluation criteria (BLEU, Rouge-L, Accuracy, and F1) are standard in prior multimodal QA tasks (e.g., text-only question-answering such as HotpotQA [1], and visual-question answering (VQAv2 [2]). [1] Yang et al., 2018. [1809.09600] HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering [2] Goyal et al. 2016. [1612.00837] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering Theoretical Claims: N/A. No theoretical claims made. Experimental Designs Or Analyses: Yes. While the overall experimental setup is sound for evaluating the time-series QA tasks, I did think the method comparison showing the benefits of ITFormer could be more convincing. The ITFormer method is an architectural framework or way to design multi-modal time series models that can answer questions over time series features. While the authors do compare against other recent multimodal architectures, I could not find details to ensure fair comparison here, including: Training data: were ITTformer models trained on the EngineMT-QA dataset? How were other models trained or “adapted” in comparison? Training hyperparameters. While the authors mention using existing multimodal architectures with image encoders adapted to time series, how was this adaptation done? Was the training setup comparable to ITFormer? Supplementary Material: Yes. Construction of the dataset, dataset examples, and dataset prompts. Relation To Broader Scientific Literature: I found the author’s time-series QA task very interesting and relevant for time series research. There have been a few early works on this (Cai et al., 2024; Merrill et al., 2024; see above for references), but the paper presents interesting positive results when we can specifically train multimodal LLMs for these tasks. Essential References Not Discussed: On the data contribution angle, it would be good for the authors to discuss the differences in their contribution vs prior related benchmarks: [Cai et al., 2024] introduce TimeSeriesExam, a similar multiple-choice QA benchmark designed to test time series understanding in various categories (pattern recognition, noise understanding, similarity analysis, anomaly detection, and causality analysis) [2410.14752] TimeSeriesExam: A time series understanding exam [Merrill et al., 2024] also propose an evaluation framework where LLMs are tasked with answering questions about time series data (provided with time series features) [2404.11757] Language Models Still Struggle to Zero-shot Reason about Time Series Other Strengths And Weaknesses: Summarized Strengths: * The authors introduce a valuable new dataset for the compelling time-series QA tasks * They also present a new architecture framework to handle fusion for both modalities * The experimental results demonstrate strong performance improvements over baselines * The framework shows good adaptability across different LLM and time-series encoder architectures Areas for improvement: 1. Balance between dataset and architecture: Especially in relation to related work on time-series QA benchmarks, the paper would benefit from emphasizing the dataset and problem definition more prominently. While the introduction does this well, the rest of the paper devotes significantly more space to architectural details than to dataset analysis. Consider expanding Section 5.1 to provide more insights into the dataset characteristics and challenges. 2. Architectural presentation: The architectural exposition in Section 4 is mathematically dense and the notation was not perfectly clear to me. It could be made more accessible with: * A concrete example walkthrough showing how a specific time-series and question pair flows through the system * Moving some of the mathematical notation details to the appendix * Simplifying Figure 2, which currently contains too many details to follow easily 3. Component analysis: Table 2 shows an ablation study of the different components, but there's limited discussion in the text about these findings. Please expand on: * Why TPE yields the largest standalone performance gain * The complementary nature of ITA and TPE when combined * Practical insights about which components are most critical for real-world deployments Other Comments Or Suggestions: Please see the following on presentation quality and framework adaptability. Presentation Problem definition clarity: * Clarify the segmentation approach for time-series data and why segments are important. It is not clear what is the relationship between different segments if any. * Use consistent notation (e.g., boldface for vectors) * Explain the challenges of "modality heterogeneity" more concretely Paper order: Consider presenting Figure 3 (dataset overview) before Figure 2 (architecture) to establish the problem context before diving into the solution details. Framework Architecture: The paper would benefit from a brief discussion of alternative time-series encoding architectures (e.g., state space models) and how the framework might accommodate them. Dataset: Clarify in the main text how adaptable your dataset construction process is to other existing time series datasets. Questions For Authors: Beyond overall performance metrics, have you identified specific failure modes in ITFormer or baseline methods that underscore the necessity of your proposed components? Could you provide insights or qualitative examples showing how generations differ across ITFormer and the baseline methods, specifically regarding the integration of Learnable Instruct Tokens? Were there specific categories of questions or patterns where ITFormer distinctly outperformed other methods, and can you elaborate on why? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive and detailed feedback. We address each of your concerns below. # **1. On Methodological Motivation and Case Study** We appreciate the reviewer’s feedback. While **Transformer-based architectures** are commonly used in vision-language and speech-language tasks, our work introduces **novel adaptations** for the **fusion of time-series data and natural language**, a challenge not systematically addressed in the literature. Our method’s **novelty** lies in the **task-specific, modality-aware components** that enable effective time-series and language integration. These components—**TPE (Time Token Position Encoding)**, **LIT (Learnable Instruct Tokens)**, **ITA (Instruct Time Attention)**, and **TAL (Time-As-Language)**—solve the following challenges: 1. **Weak semantic nature of time-series data**: **TPE** provides temporal structure to capture sequential data. 2. **Misalignment between time and text**: **LIT** enables extraction of task-specific information from queries. 3. **Task-sensitive dynamic mappings**: **ITA** and **TAL** facilitate task-guided attention and temporal reasoning. ### **Case Study Example** In our case study, **ITFormer** demonstrates clear advantages over **GPT-4o** in processing time-series data with natural language queries https://imgur.com/a/dYSrsE9: - **TPE** allows **ITFormer** to correctly identify the rising altitude trend, while **GPT-4o** struggles with misinterpreting the trend as a descent. **TPE** enables the model to understand the temporal sequence, providing the **largest standalone performance gain** in our ablation experiments. - **LIT** helps **ITFormer** align its response with the query, focusing on relevant sensor data. Without **LIT**, the model may choose irrelevant data, as shown when **GPT-4o** introduces extraneous, irrelevant information despite recognizing the trend. - **GPT-4o**, while capturing the general trend, fails to maintain task relevance by generating hallucinated content like **compressor stall risk**, which doesn’t align with the data, highlighting the limitations of models that lack task-specific focus. ### **Ablation Experiment Insights** Our **ablation experiments** confirm the significance of each component: - **TPE** provides the largest standalone performance gain by enabling the model to capture the sequential nature of time-series data. - **ITA** and **TPE** complement each other by allowing task-specific attention over relevant time spans, improving performance. - **LIT** is crucial for ensuring the model aligns its response with the task, particularly in real-world deployments where **task relevance** is key. While **Transformer-based architectures** are common, our work introduces **task-specific components** that are critical for time-series and natural language fusion. Through **TPE**, **LIT**, **ITA**, and **TAL**, we solve the challenges of aligning temporal data with textual queries. The **case study** and **ablation results** demonstrate the practical importance of these innovations, improving accuracy and task relevance in multi-modal time-series modeling. # **2.Dataset Contribution and Discussion on Essential References** Due to space limitations, the relevant results have been included in the rebuttal to Reviewer n3vv. Kindly refer to it for further details. # **3. Experiment Details** Due to space limitations, the relevant results have been included in the rebuttal to Reviewer n3vv. Kindly refer to it for further details. # **4.Presentation and Readability** We appreciate the reviewer’s feedback on the presentation and readability. We have carefully considered the suggestions and will make the following improvements in the revised version of the paper: 1. **Clarification of task definitions**: We will provide clearer explanations for the tasks, especially **Understanding**, **Perception**, **Reasoning**, and **Decision-Making**, to avoid ambiguity in how these tasks are delineated and their practical significance. 2. **Notational consistency**: We will ensure consistent use of notation, particularly for vectors and time-series representations, to improve clarity and prevent confusion. 3. **Figures and diagrams**: We will revise figures to make them more intuitive, including simplifying complex visualizations and adding concrete examples where applicable to enhance understanding. 4. **Flow and organization**: We will restructure sections where necessary to improve the logical flow, ensuring that the problem definition is clearly presented before the method, and enhancing transitions between sections for smoother readability. We believe these adjustments will improve the overall clarity of the paper while maintaining the depth of our contributions. # **5.Dataset Construction Adaption** Due to space limitations, the relevant results have been included in the rebuttal to Reviewer n3vv. Kindly refer to it for further details.
Summary: The paper presents ITFormer, a novel framework that bridges time series and natural language for multi-modal temporal-textual question answering. Specifically, ITFormer uses time token position encoding (TPE) to encode the time series, then uses learnable instruct tokens (LIT) to facilitate the alignment between temporal and textual modalities, and finally uses instruct time attention, enabling efficient and robust cross-modal fusion to align and fuse temporal and textual representations. Experiments on the self-constructed EngineMT-QA dataset demonstrate that ITFormer achieves better performance than existing methodologies. Claims And Evidence: The claims in the paper are well supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: No Theorems proposed in the paper. Experimental Designs Or Analyses: The experimental designs listed in Section 5.2 to 5.5 (including ablation studies) make sense. Supplementary Material: The whole appendix was checked. Relation To Broader Scientific Literature: Multi-modal, multi-dimensional time series question answering is a well-established problem in many real-world scenarios, where the integration of time series QA with LLMs could have wide application. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength 1. This paper focuses on the problem of multi-task, temporal-textual time series reasoning and question-answering, which is of high importance yet not well explored by existing research works. 2. The ablation study in Section 5.4 and the efficiency study in Section 5.5 show that the proposed ITFormer framework consistently achieves better performance than existing methodologies, where larger LLMs always lead to better ITFormer performance, demonstrating the scalability of the proposed framework. 3. ITFormer is a lightweight framework with only ~1% of parameters needing to be tuned with frozen LLMs, which is easy to deploy in real-world applications. 4. The paper is well-written, easy to follow, and understand. Disadvantage: 1. Although the paper provides an anonymous GitHub for the code and models, the dataset cannot be accessed with specific VPNs. 2. The experiments are solely on the self-created EngineMT-QA dataset. More experiments on existing datasets could help to better demonstrate the performance of the proposed method. Other Comments Or Suggestions: N/A Questions For Authors: See disadvantages Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive and detailed feedback. We address each of your concerns below with clarifications and new experimental results. # 1. **Generalization and Benchmark Scope** Thank you for your insightful feedback on the importance of generalization in our approach. We fully recognize its significance and appreciate the opportunity to clarify this aspect. Although **EngineMT-QA** is based on aero-engine operation and maintenance data, the core challenge addressed by **ITFormer**—integrating time-series signals with natural language text—is inherently domain-agnostic. Tasks such as fault detection, trend analysis, and decision-making from multimodal data are common across various domains including healthcare, finance, and climate science. Importantly, the **design of the EngineMT-QA dataset** is modular and transferable. Our data construction pipeline consists of: (1) temporal segmentation of raw multivariate time-series, (2) instruction-style question generation grounded in operational semantics, and (3) structured answer derivation based on observable data events. This methodology is not bound to the aero-engine domain and can be adapted to other domains where time-series data and domain-specific semantics coexist. To evaluate generalization, we conducted two key experiments (each with a 6:4 train-test split): - **Experiment 1: TimeSeriesExam Dataset** We tested **ITFormer** on **TimeSeriesExam**, a domain-agnostic benchmark designed to assess time-series understanding across five categories: pattern recognition, noise understanding, similarity analysis, anomaly detection, and causality analysis [Cai et al., 2024]. ITFormer achieved **0.792** in Causality Analysis, outperforming all baselines. This confirms that ITFormer is not domain-locked and generalizes well to unfamiliar, structured time-series tasks. - **Experiment 2: Transfer Learning from EngineMT-QA → TimeSeriesExam** We pre-trained **ITFormer** on **EngineMT-QA**, then fine-tuned it on **TimeSeriesExam**. Performance improved further, especially in **Pattern Recognition** and **Anomaly Detection**, indicating that EngineMT-QA helps the model learn **domain-agnostic time-series reasoning patterns**. This shows that our dataset captures fundamental properties of time-series + language interactions that are transferable across domains. | **Model** | **Pattern Recognition** | **Anomaly Detection** | **Noise Understanding** | **Similarity Analysis** | **Causality Analysis** | | --- | --- | --- | --- | --- | --- | | **GPT-4o (image)** | 0.82 | 0.80 | 0.90 | 0.87 | 0.68 | | **GPT-4o (text)** | 0.81 | 0.75 | 0.78 | 0.80 | 0.28 | | **Gemini-1.5-Pro (image)** | 0.81 | 0.69 | 0.83 | 0.80 | 0.48 | | **Gemini-1.5-Pro (text)** | 0.82 | 0.72 | 0.90 | 0.80 | 0.68 | | **Phi-3.5-vision** | 0.10 | 0.09 | 0.18 | 0.17 | 0.07 | | **Phi-3.5-mini-instruct** | 0.42 | 0.22 | 0.22 | 0.40 | 0.22 | | **MCAN-VQA** | 0.8086 | 0.7921 | 0.9422 | 0.897 | 0.731 | | **ITFormer** | 0.8275 | 0.8391 | 0.9500 | 0.920 | 0.792 | | **ITFormer (Transferred)** | **0.8593** | **0.8891** | **0.9833** | **0.941** | **0.834** | As shown above, **ITFormer** consistently outperforms state-of-the-art models, particularly in **Causality Analysis** and **Pattern Recognition**. Its performance improves further through transfer learning, suggesting that **EngineMT-QA not only trains robust models but also captures cross-domain time-language patterns**. In summary, **ITFormer** demonstrates strong generalization to unseen domains and tasks. Furthermore, **EngineMT-QA** is not only a valuable benchmark but also a **contributive asset** to the broader field of time-series + language modeling. Its design facilitates reusable, instruction-driven QA setups, offering a foundation for building unified benchmarks and training models with cross-domain transferability. > Citation for TimeSeriesExam: > > > Cai Y, Choudhry A, Goswami M, et al. *TimeSeriesExam: A Time Series Understanding Exam*. NeurIPS Workshop on Time Series in the Age of Large Models, 2024. [arXiv:2410.14752](https://arxiv.org/abs/2410.14752) > # 2. **Dataset Accessibility** We appreciate the reviewer pointing out the dataset access issue. While the anonymous GitHub was used for submission, we acknowledge that certain VPNs may block access to the hosting service. To ensure broader accessibility, we have now provided an alternative download link via Baidu Cloud: **Dataset link**: [https://pan.baidu.com/s/19uG78pNCK3IqMIrOqzPLQA](https://pan.baidu.com/s/19uG78pNCK3IqMIrOqzPLQA) **Access code**: `9niz` We will also include this alternative in the final version of the paper to facilitate future access. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I think the response are convincing, and I will consider increasing the score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback and for considering an increased score. We truly appreciate your recognition of our efforts and your valuable comments, which have helped us further improve the paper.
Summary: This paper introduces ITFormer, a novel framework that bridges time-series signals and natural language for multi-modal question answering (QA). To support this task, the authors release EngineMT-QA, the first large-scale, multi-task dataset designed to capture complex interactions between temporal data and textual queries. ITFormer integrates time-series encoders with frozen large language models (LLMs) using innovative components such as Time Token Position Encoding, Learnable Instruct Tokens, Instruct Time Attention, and Time Token as Language, enabling effective temporal-textual fusion with minimal training overhead. Experimental results show that ITFormer achieves state-of-the-art performance across diverse QA tasks, offering a scalable and efficient solution for real-world time-series understanding and decision-making. Claims And Evidence: The claims made in the paper are generally well-supported by clear and convincing evidence. The authors provide comprehensive experimental results demonstrating ITFormer’s state-of-the-art performance across multiple QA tasks with minimal trainable parameters, supported by strong baselines and detailed ablation studies. The introduction of the EngineMT-QA dataset is justified as a novel contribution, and the effectiveness of each ITFormer component (TPE, LIT, ITA, TAL) is validated through systematic analysis. While the generalization capability across different encoders and LLMs is supported to some extent, a notable limitation is that the EngineMT-QA dataset is constructed entirely from one domain, which raises concerns about the model’s ability to generalize to other real-world time-series scenarios such as healthcare, finance, or energy systems. Without evaluations on more diverse domains, it remains unclear whether ITFormer’s performance and alignment mechanisms are truly generalizable beyond this specific application context. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally well-aligned with the problem of time-series question answering, and the ITFormer framework is thoughtfully designed to bridge temporal and textual modalities. The use of a QA formulation to unify multiple time-series tasks is a reasonable and practical choice. However, a key limitation lies in the evaluation benchmark itself—EngineMT-QA is constructed solely from one domain, which significantly constrains the representativeness of the benchmark. While the tasks span understanding, perception, reasoning, and decision-making, they are all derived from a single application context, limiting the conclusions that can be drawn about the model's general applicability. As a result, it is unclear whether ITFormer would perform equally well on time-series data from other domains such as healthcare, finance, or climate monitoring. To truly validate the method's generalizability and robustness, it should include multi-domain benchmarks that reflect the broad range of real-world time-series applications. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design in the paper is generally sound and systematically structured, with evaluations conducted across four distinct QA task types: understanding, perception, reasoning, and decision-making. The use of multiple metrics (e.g., Accuracy, F1, BLEU, Rouge-L) and comparisons against both multimodal baselines (e.g., ChatGPT-4o, InstructBLIP) and domain-specific methods (e.g., Time-LLM, AutoTime) helps validate ITFormer’s performance. The inclusion of ablation studies further strengthens the reliability of the analysis by isolating the contribution of each model component. Supplementary Material: Yes, I reviewed the supplementary material, with a particular focus on the dataset construction process. The authors provide a detailed description of how the EngineMT-QA dataset was built from the N-CMAPSS aero engine dataset, including the use of domain knowledge, LLM-based refinement, and expert validation to generate question-answer pairs across four task types. Relation To Broader Scientific Literature: The paper makes a meaningful contribution to the growing literature at the intersection of time-series analysis and multimodal language modeling. Its proposed framework, ITFormer, builds upon prior advancements in time-series encoders (e.g., PatchTST, Informer, Crossformer) and multimodal integration techniques inspired by vision-language models such as InstructBLIP. Essential References Not Discussed: Merrill, Mike A., et al. "Language models still struggle to zero-shot reason about time series." arXiv preprint arXiv:2404.11757 (2024). Ye, Wen, et al. "Beyond Forecasting: Compositional Time Series Reasoning for End-to-End Task Execution." arXiv preprint arXiv:2410.04047 (2024). Chow, Winnie, et al. "Towards time series reasoning with llms." arXiv preprint arXiv:2409.11376 (2024). Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive and detailed feedback. # 1. **Generalization and Benchmark Scope**: Due to space limitations, the relevant results have been included in the rebuttal to Reviewer zRtW. Kindly refer to it for further details. # 2. **Dataset Contribution and Discussion on Essential References** Thank you for your feedback. The **EngineMT-QA** dataset is designed around four core tasks to comprehensively evaluate time-series and textual data fusion: ### **Dataset Construction and Task Design** - **Understanding**: Interpreting temporal patterns and mapping them to semantic intents expressed in natural language. - **Perception**: Inferring latent system states from raw time-series signals under semantic supervision. - **Reasoning**: Modeling temporal dependencies to support predictive inference and hypothetical outcomes. - **Decision-Making**: Integrating temporal understanding and prediction to generate actionable, context-aware responses. ### **Task and Dataset Construction Details** The dataset construction involves: - **Time-Series Data**: Extracted from real aerospace engine sensor data (temperature, pressure, vibration, etc.). - **Task Design**: Textual questions are crafted for each of the four tasks, combining time-series data and natural language queries to evaluate performance. For example: - **Understanding**: Interpreting temperature trends. - **Perception**: Diagnosing faults from sensor data. - **Reasoning**: Estimating remaining useful life from historical data. - **Decision-Making**: Recommending maintenance actions based on system health and trends. ### **Comparison with Existing Literature** We acknowledge **Cai et al. (2024)**’s **TimeSeriesExam**, which focuses on **pattern recognition** and **anomaly detection**. However, its tasks are relatively simple, mainly addressing **basic time-series perception**. In contrast, **EngineMT-QA** offers a more comprehensive framework, covering **Understanding**, **Reasoning**, and **Decision-Making**, with a focus on **real-world decision-making** and **semantic complexity**. Additionally, while **Merrill et al. (2024)** and **Chow et al. (2024)** focus on reasoning, **EngineMT-QA** evaluates multiple cognitive abilities through a **multitask approach**, making it more comprehensive. ### **Transfer Learning Experiment and Contribution** In our transfer learning experiment, models pre-trained on **EngineMT-QA** showed significant improvement on **TimeSeriesExam**, especially in **Pattern Recognition** and **Anomaly Detection**, demonstrating the dataset’s applicability beyond the aerospace domain. **EngineMT-QA** offers a comprehensive task framework for multimodal time-series and text fusion, incorporating a wider range of tasks than existing benchmarks. Its cross-domain applicability is demonstrated through transfer learning, contributing to future intelligent decision-making applications. # 3. **Experiment Details** We would like to thank the reviewers for their valuable feedback. In all comparison experiments, **ITFormer** models were trained on the **EngineMT-QA** dataset, with training conducted on the training subset and evaluation on the test subset. For a fair comparison, **existing multimodal models** like **GPT-4o** and **Gemini**, **vision-text models** like **InstructBlip**, **MCAN-VQA**, and **CoCa** were all adapted to use **time-series encoders** instead of their original image encoders. Specifically, the **PathTST** time-series encoder, which is identical to the one used in **ITFormer**, replaced the image encoder in these vision-text models. This ensures that all models are evaluated under the same conditions, using time-series data in place of images. The **training steps**, **epochs** were consistent across all models, ensuring that the performance differences between **ITFormer** and the comparison models are due to the architecture and not variations in training configurations. Thus, all the experiments were conducted with **fair comparisons**, allowing the **ITFormer** model’s performance advantages to be assessed in a consistent and controlled environment. # 4. **On Constructing More General Time-Series QA Datasets** To build broader and more transferable time-series QA datasets, we propose a **semi-automated framework** that combines: - structured time-series segmentation, - **domain-specific documents or events** (e.g., logs, manuals), - and **LLM-driven question-answer pair generation** based on grounded events and patterns. This framework allows us to create generalizable and transferable dataset, applicable to various domains like healthcare or finance, by adapting the event-based and temporal question generation process. Thus, while EngineMT-QA is built on aero-engine data, the underlying construction process is versatile and can be extended to other industries.
Summary: The paper addresses the problem of multimodal time series modelling. The main motivation is to enrich time series with natural language to enrich the time series with textual information. For this reason, a benchmark for answering time series questions is proposed, focusing on real-world aircraft engine operation and maintenance scenarios. In addition, the paper presents an approach to combine the two modalities based on pre-trained and frozen LLMs. For this purpose, a speech and time series encoder is introduced and later combined with a fusion encoder. A decoder finally outputs the response to the input time series and textual description. The evaluation is standard and shows that the proposed approach leads to promising results compared to the prior work. ## update after rebuttal After reading the reviews and the author's feedback, I am increasing my score to weak acceptance. The rebuttal did a very good job of addressing my concerns as well as the open points of the other reviews. Claims And Evidence: The problem has often been approached using ML methods. The proposed methodology is well suited to the problem. Moreover, the evaluation is done on standard datasets. Methods And Evaluation Criteria: The evaluation is sufficient. However, a general QA dataset would be relevant to the paper. Theoretical Claims: The paper does not make theoretical claims. Experimental Designs Or Analyses: The proposed benchmark is well designed. Supplementary Material: It provides additionally information for the dates. Relation To Broader Scientific Literature: The new benchmark is novel compared to the current literature. Essential References Not Discussed: The prior discussion is fine. Other Strengths And Weaknesses: Strengths: - The paper is well written and easy to follow. - The proposed benchmarks are a valid contribution. - The proposed benchmark is valuable for application specific scenarios. Paper weaknesses: - The benchmark focuses on the operation and maintenance of aero engines. It does not explore the generalisation of the questions to a broader framework. It is well thought out and designed, but it doesn't necessarily advance the machine learning community as it is purely application specific. This is a limitation for submission to ICML, but not in general. It would fit very well in an application track of a machine learning venue. - The proposed methodology is common in the literature. From a methodological point of view, there are no new elements/novelties in the paper. - The proposed approach outputs text instead of predicted time series. It would be important to include ablation studies when outputting time series. - Table 1 could include the number of parameters for each model. The proposed approach may be more expensive than the previous work. Other Comments Or Suggestions: No Questions For Authors: Discuss the possibility to include a more generic benchmark. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # 1. **Generalization and Benchmark Scope:** Thank you for your valuable comments. We conducted additional experiments to evaluate generalization beyond the aero-engine domain: 1. **ITFormer generalizes across domains**, achieving top performance on the domain-agnostic TimeSeriesExam benchmark. 2. **EngineMT-QA reflects core challenges of temporal-text interaction**, as shown by consistent gains in cross-domain transfer learning. (Details provided in our full response to Reviewer zRtW.) # 2. **Ablation studies when outputting numerical values** We thank the reviewer for highlighting this important point. While our original setting unified all outputs in natural language form, we agree that evaluating the model’s ability to express **numerical values directly** is crucial for practical multi-modal time-series applications, where numerical precision is often required for decision-making. To address this, we introduced an ablation experiment with a **mixed-output setting**, where the model is required to generate both **structured numerical strings** and natural language in different tasks. Specifically: - **Task 2 (Perception)**: The model predicts the **health index** of a component (e.g., Component A: `0.87`), followed by a decision of whether the health index exceeds a threshold, resulting in a textual judgment (e.g., "Fault" or "No Fault"). - **Task 3 (Reasoning)**: The model predicts the **Remaining Useful Life (RUL)** of a system, and based on the RUL, determines a specific operational range (e.g., `30%-50%` ), followed by an accuracy measure to evaluate if the predicted range is correct. For these tasks, the output is structured as numerical strings, such as: `"Component A: 0.87, RUL: 43"`. Both the **structure** and **numeric accuracy** are crucial for evaluation. | ID | Main | ITA | TPE | TAL | LIT | Output Type | Accuracy | BLEU | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | (**h'**) | ✅ | ✅ | ✅ | ✅ | ✅ | Mixed (Text + Num) | **68.12** | **56.74** | | (h) | ✅ | ✅ | ✅ | ✅ | ✅ | Text | **73.29** | **60.27** | As shown in Row **(h’)** of the updated ablation table, the mixed-output setting results in a slight performance drop compared to the full natural language baseline (Row h): - **Accuracy**: 73.29 → **68.12** - **BLEU**: 60.27 → **56.74** We believe this performance difference is due to several reasons: - **Format sensitivity**: Numerical outputs are evaluated with strict string matching—minor deviations (e.g., `"0.87"` vs. `"0.870"`) can cause hard penalties in accuracy and BLEU. - **Dual-mode decoding complexity**: Generating both text and structured numbers increases the output space and decoding difficulty, requiring the model to switch between narrative and precise formats. - **Weaker supervision for numbers**: Unlike text, numerical outputs lack rich contextual anchors and often require implicit regression, making them harder to learn accurately. Moving forward, we plan to explore further the combination of structured text generation and direct numerical regression, as well as develop a **standardized dual-modality evaluation benchmark** that covers both output types—textual and numerical. # 3. **On Model Efficiency and Trainable Parameters:** Thank you for your suggestion. We will update Table 1 in the revised version to include the number of parameters for each model. Despite using only 30.94M trainable parameters across all variants, ITFormer consistently outperforms larger models like AutoTime (205M) and InstructBlip (190M), achieving superior accuracy and BLEU scores. his demonstrates that ITFormer achieves an excellent balance between performance and efficiency—detailed comparisons are visualized in https://imgur.com/a/Y53oD5B. Here is the updated parameter comparison: | **Model** | **Total Parameters** | **Trainable Parameters** | | ----------------- | -------------------- | ------------------------ | | **InstructBlip** | 7.19B | 190M | | **MCAN-VQA** | 7.04B | 35.2M | | **CoCa** | 1.08B | 78.74M | | **Time-LLM** | 7.09B | 86.54M | | **AutoTime** | 7.21B | 205M | | **ITFormer-0.5B** | 0.53B | 30.94M | | **ITFormer-7B** | 7.03B | 30.94M | | **ITFormer-14B** | 14.03B | 30.94M | We hope this clarifies the efficiency of **ITFormer** in terms of both performance and computational cost. # 4. **On Methodological Novelty** Due to space limitations, the relevant results have been included in the rebuttal to Reviewer ZFEy. Kindly refer to it for further details. # 5. **On Constructing More General Time-Series QA Datasets** Due to space limitations, the relevant results have been included in the rebuttal to Reviewer n3vv
null
null
null
null
null
null
Learning Invariant Causal Mechanism from Vision-Language Models
Accept (poster)
Summary: This work aims to leverage Invariant Causal Mechanisms in causality to improve prediction under distribution shifts. However, a detailed summary is challenging for me due to several fundamental issues, including an unclear problem formulation, misconceptions of key concepts, and unrealistic theoretical assumptions. Claims And Evidence: There are several unclear or even mistaken claims in the paper. For example: 1) Problem Setting: The claim "The goal of OOD generalization is to learn a predictor from training environments... domain shift and open-class scenarios. Domain shift arises when the data distribution in the test environment differs from that in the training environment, while open-class scenarios involve previously unseen classes appearing at test time." is confusing. Domain shift is a broad category encompassing various settings, such as covariate shift, conditional shift, and label distribution shift. The authors should specify which type of domain shift their work addresses to avoid ambiguity. Open-class scenarios, where previously unseen classes appear at test time, present a significant challenge. The authors should clarify whether this setting is realistically addressable and, if so, whether there exists a theoretical solution for it. 2) Conceptual Misuse: You have assumed a causal generative model, as shown on the left in Figure 1, where there is a clear causal relationship, e.g., y causes z, and z causes x. Causal mechanisms should be defined from cause to effect, rather than as p(y∣do(x)) or p(y∣do(z)), as claimed. From my understanding, a causal mechanism refers to the underlying process or system that explains how one variable influences another in a causal relationship. It describes how causes bring about effects, and is typically assumed to be invariant. Therefore, one cannot claim that the relationship from effect to cause constitutes a causal mechanism, as this relationship is generally variant and does not align with the principles of causal inference. Further, in Proposition 5.1, p(y|do(z)) or p(y|do (x)) (e.g., "cause given the effect") typically does not have a well-defined meaning in the standard framework. Please let me know if I have misunderstood this. Methods And Evaluation Criteria: Since the identifiability theory is problematic (see below for concerns regarding the theory), I am not confident that the method's effectiveness is due to causality. Theoretical Claims: Theorem 5.3 is central to supporting this work. However, the assumption in Condition 5.2 is quite peculiar. It broadly states, "There exist some samples such that the inference model can be equal to the generative model on these samples." This is strange, because the generative model is completely unknown. How can one enforce the inference model to match an unknown prior from the generative model? If this assumption holds, one could simply assume that the inference model equals the generative model, which would make the proof trivial. In fact, after reviewing the proof, I found that there is almost no technical challenge to the identifiability proof under the assumption in Condition 5.2. Experimental Designs Or Analyses: For the experiment results based on CLIP, there is a significant concern regarding whether the training process of CLIP truly does not use the data in the experiments. Since CLIP is trained on a large number of image-text pairs, it’s important to question whether there is any potential data leakage. Specifically, it should be clarified whether the data used in the experiments overlaps with or has any connection to the data CLIP was trained on, as this could lead to biased or invalid results. Ensuring that no data leakage occurs is critical to maintaining the integrity of the experiment's findings. Supplementary Material: Yes, I reviewed the proof for the theorem. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Causality is a challenging concept to understand. I believe it is particularly effective in handling distribution shift tasks, as it not only provides a theoretical framework but also offers practical tools in certain cases. However, we must be cautious in how we apply it, and at the very least, it requires a deep understanding of causality. Questions For Authors: Overall, 1) the problem setting is unclear, and some fundamental concepts in causality are misused (see Claims And Evidence). 2) The identifiability analysis is unrealistic and nearly flawed (Theoretical Claims), which undermines confidence in the proposed methods. 3) Additionally, using the CLIP model to claim OOD distribution experiments should be approached with caution and carefully considered (see Experimental Designs Or Analyses). Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: ## Re: Claims And Evidence & Q1 **Regarding the definition of domain shift and open-class.** 1. In our context, domain shift primarily refers to covariate shift, where the $p(x)$ differs between the training and testing phases while $p(y|x)$ remains unchanged. This scenario is widely adopted in standard domain generalization tasks [1]. 2. The open-class prediction problem is well defined [2–5]. The problem is addressable, and one of the solution is CLIP [3]. Several studies [4,5] also provide theoretical support. We will include detailed explanations regarding these scenarios in the final version. **Regarding the causal mechanism in Section 5.1.** We clarify as follows: 1. The construction of SCM depends on the **type of task** [10]. For example, whether the chicken causes the egg or the egg causes the chicken depends on the objective: if we study how chickens produce eggs, the chicken is the cause and the egg is the effect; if we study how eggs develop into chickens, then the egg is the cause. Therefore, when we study the task of prediction, the input image is the cause, and the predicted label is the effect. 2. We study **two** SCMs in our paper: Figure 1(a) is the generation process, while Figure 1(b) is the prediction process. Since our work primarily focuses on prediction, the causal mechanisms $p(y|do(x))$ and $p(y|do(z_{inv}))$ in Proposition 5.1 are both defined based on the SCM **in Figure 1(b)** rather than Figure 1(a). In Figure 1(b), the model construct a causal chain $X \to Z \to Y$, where $X$ is the cause, $Y$ is the effect. Therefore, it **does not means inferring the cause from the effect**. In this SCM, the causal effect is: "how change in image $X$ affects the prediction of output $Y$". 3. As stated in lines 194–197, the prediction process can be viewed as the inverse of the data generation process. We emphasize that the term “inverse process” here is **solely a mathematical construct** to derive the structural equations. These equations correspond to edges in Figure 1(b), which we further provide a detailed explanation in lines 197–208. Therefore, Figure 1(b) is a valid SCM. In summary, our proposed SCM doesn't contain "cause given the effect" scenarios. And the causal mechanism in Figure 1(b) is valid and well-defined. ## Re: Theoretical Claims & Q2 1. The reviewer may have misunderstood the logical connection between Condition 5.2 and Theorem 5.3. Theorem 5.3 aims to prove identifiability under certain conditions, and our work focuses on **formulating such a condition**—namely, Condition 5.2. Therefore, although the proof of Theorem 5.3 is relatively straightforward, its validity relies on the formulation of Condition 5.2. 2. Literature [6] proves that without additional constraint, latent factors are unidentifiable. Motivated by literatures [7,8,9], we **identify and formalize** this condition in this paper, thereby facilitating a clear understanding and straightforward implementation of the proof for Theorem 5.3. 3. The **advantage** of this condition is that it **does not require** additional assumptions on the latent factors (prior distribution) or on the generative process. Instead, it relies solely on observable label data. 4. The condition consists of two parts: consistency and diversity. The consistency (Equation 12) only requires the output distribution $\hat{p}(y|x)$ to match the **observable distribution** $p(y|x)$ , rather than an unknown prior. 5. We demonstrate in lines 247~255 why CLIP can be considered satisfied this condition. In summary, the Condition 5.2 and Theorem 5.3 are formulated within a standard theoretical framework based on extensive prior work and have practical implications. ## Re: Experimental Designs Or Analyses & Q3 We understand the reviewer's concerns, but: 1. To date, OpenAI has not released the training data used for CLIP, which makes it extremely challenging to verify whether there is any overlap between the experimental data and the data used to train CLIP. 2. Our experimental design strictly adheres to the established community standards for fine-tuning CLIP in domain generalization tasks. (including CLIP-Adapter, CLIPood, CoOp, CoCoOp, MIRO, and DPL) Therefore, we believe that our experimental setup is both reasonable and widely accepted. [1] Domain generalization: A survey. [2] A survey of zero-shot learning: Settings, methods, and applications. [3] Learning transferable visual models from natural language supervision. [4] Zero-shot learning with semantic output codes. [5] Attribute-based classification for zero-shot visual object categorization. [6] Nonlinear independent component analysis: Existence and uniqueness results. [7] Nonlinear ICA using auxiliary variables and generalized contrastive learning. [8] On linear identifiability of learned representations. [9] Contrastive learning inverts the data generating process. [10] Toward causal representation learning. --- Rebuttal Comment 1.1: Comment: **In our context, domain shift primarily refers to covariate shift**.. --I respectfully disagree. Domain shift can generally be categorized into several specific settings, including covariate shift, target shift, conditional shift, and conditional-target shift [1,2]. **The open-class prediction problem is well defined [2–5]. The problem is addressable, and one of the solution is CLIP [3]** --How do you ensure that the training data used for CLIP does not include previously unseen classes from the testing data? **We study two SCMs in our paper:**, --For a given context, there should typically be only one causal model, as a causal model aims to represent a physical process. One cannot claim two models for the same context, as the corresponding physical process is determined and unique. You have defined data generation in Figure 1a. In this context, Figure 1b—which you acknowledge as a predictive model—should only be understood as an inference model." **Condition 5.2 and Theorem 5.3.** --Theorem 5.3 is based on Condition 5.2. If Condition 5.2 is not satisfied, Theorem 5.3 does not hold. From a high-level perspective, Condition 5.2 requires that the estimated z (the left-hand side of Eq. 2, $\hat{z}= f_{I}(x) = f\_{I}(g(z))$ ) matches the ground-truth $z$ (the left-hand side of Eq. 2, where $ f\_{I*}(x) = g^{-1}(x) = g^{-1}(g(z))=z$). Consequently, you assume that $\hat{z} = z$, which is the objective of identifiability.......Moreover, one does not know the ground-truth $z$. Even if one were to assume it, how, then, could this condition be incorporated into the inference model? [1] Zhang, Kun, et al. "Domain adaptation under target and conditional shift." International conference on machine learning. Pmlr, 2013. [2] Stojanov, Petar, et al. "Domain adaptation with invariant representation learning: What transformations to learn?." Advances in Neural Information Processing Systems 34 (2021): 24791-24803. --- Reply to Comment 1.1.1: Comment: **Response to Comment 1:** Indeed, the understanding of domain shift is as the reviewer described. However, what we intended to express is that our submission **focuses specifically** on covariate shift, that is, the discrepancy between the training and testing data distributions. **Response to Comment 2:** Since the composition of CLIP's training dataset has not been publicly released, we are unable to directly verify its contents. To further investigate this issue, we propose an experimental approach. The basic idea is as follows: if the dataset used in our submission were included in CLIP’s training data, then testing CLIP directly on this dataset should yield strong performance. | Method | IMAGENET-S | IMAGENET-A | Terra Incognita | iWildCam-WILDS 2020 | | :------------: | :--------: | :--------: | :-------------: | :-----------------: | | CLIP Zero-shot | 46.1 | 47.8 | 34.2 | 10.6 | | Ours | 50.9 | 51.4 | 52.5 | 14.1 | The results of our test are shown in the table below. We observe that CLIP's performance is clearly suboptimal when tested directly. This supports our claim that the dataset is not included in CLIP's training data, and also validates the soundness of our experimental design. **Response to Comment 3:** In our previous response, we provided an example: *Which came first, the chicken or the egg?* This example was intended to illustrate the following point: while it is true that the SCM remains invariant, the true SCM is also unknown. We can only infer it based on empirical observations and reasoning. As a result, different interpretations may lead to different SCMs. In this paper, we present two such interpretations: one from the perspective of data generation, and the other from the perspective of data prediction. These two interpretations form a closed loop—they are mutually reversible. Building on this, the remainder of the paper develops the framework from the prediction-oriented perspective. **Response to Comment 4:** Condition 5.2 does not imply that $\hat{z}=z$. We provide a detailed explanation below. Consider a training dataset $$ \mathcal{D} = \{(x_i, t_i)\}_{i=1}^N, $$ sampled from the joint distribution $p(x,t)$. Let $\mathcal{T}$ denote the set of all possible values of $t$. Let $\theta$ denote the parameters of $f_I$ and $f_T$, and let $\theta^*$ denote the parameters of $f_{I^*}$ and $f_{T^*}$ (to which we have no access). The ground-truth conditional probability can be regarded as produced by $f_{I^*}$ and $f_{T^*}$: $$ p_{\theta^*}(t\mid x,\mathcal{T}) = \frac{\exp(f_{I^*}(x)^\top f_{T^*}(t))}{\sum_{t'\in\mathcal{T}} \exp(f_{I^*}(x)^\top f_{T^*}(t'))} =\begin{cases} 1, & \text{if } (x,t)\in\mathcal{D},\\\\ 0, & \text{otherwise}. \end{cases} $$ Similarly, the CLIP model functions $f_I$ and $f_T$ produce the distribution $$ p_{\theta}(t\mid x,\mathcal{T}) = \frac{\exp(f_{I}(x)^\top f_{T}(t))}{\sum_{t'\in\mathcal{T}} \exp(f_{I}(x)^\top f_{T}(t'))}. $$ The training objective for CLIP is to minimize the KL divergence $$ \mathbf{KL}\Bigl(p_{\theta}(t\mid x,\mathcal{T}) \Vert p_{\theta^*}(t\mid x,\mathcal{T})\Bigr). $$ Ideally, after training, we have $$ p_{\theta}(t\mid x,\mathcal{T}) = p_{\theta^*}(t\mid x,\mathcal{T}), $$ that is, $$ \frac{\exp(f_{I}(x)^\top f_{T}(t))}{\sum_{t'\in\mathcal{T}} \exp(f_{I}(x)^\top f_{T}(t'))} = \frac{\exp(f_{I^*}(x)^\top f_{T^*}(t))}{\sum_{t'\in\mathcal{T}} \exp(f_{I^*}(x)^\top f_{T^*}(t'))}. $$ This equality illustrates the consistency aspect of Condition 5.2. Building on this, for any pair $t_a$ and $t_b$ the following ratio should hold: $$ \frac{p_{\theta}(t_a\mid x,\mathcal{T})}{p_{\theta}(t_b\mid x,\mathcal{T})} = \frac{p_{\theta^*}(t_a\mid x,\mathcal{T})}{p_{\theta^*}(t_b\mid x,\mathcal{T})}, $$ which implies $$ \frac{\exp(f_{I}(x)^\top f_{T}(t_a))}{\exp(f_{I}(x)^\top f_{T}(t_b))} = \frac{\exp(f_{I^*}(x)^\top f_{T^*}(t_a))}{\exp(f_{I^*}(x)^\top f_{T^*}(t_b))}. $$ Taking logarithms on both sides, we obtain $$ \bigl(f_T(t_a) - f_T(t_b)\bigr)^\top f_I(x) = \bigl(f_{T^*}(t_a) - f_{T^*}(t_b)\bigr)^\top f_{I^*}(x). $$ Moreover, the diversity condition requires that there exist at least $D+1$ pairs $(t_a, t_b)$ such that different $f_T(t_a)-f_T(t_b)$ form different basis for some space $L$, and different $f_{T^*}(t_a)-f_{T^*}(t_b)$ form different basis for another space $L'$. Consequently, we have $$ f_I(x) = \bigl(L' L^{-1}\bigr)^\top f_{I^*}(x) = A f_{I^*}(x), $$ indicating that $f_I(x)$ is a linear transformation of $f_{I^*}(x)$. Note that the matrix $A$ is unknown. Thus, Condition 5.2 does not require any knowledge of $f_{I^*}$ or $f_{T^*}$, nor does it necessitate knowing the ground-truth $z$. Instead, we only assume that the data distribution is generated by these underlying functions.
Summary: The paper analyzes the OOD generalization of CLIP via the lens of causal/invariant predictor learning, where the goal is to make predictions via the invariant (causal) features for the downstream task. Motivated by the failure cases of naive funetuning of CLIP, the authors propose CLIP-ICM as a principled approach. The proposed approach relies on the linear identifiability guarantees in CLIP's representation space, which is further disentangled into invariant and environment specific features by leveraging interventional data. With the identified invariant features, CLIP-ICM train a linear probe for making predictions in the downstream task. CLIP-ICM is benchmarked on widely used OOD generalization datasets, where it outperforms baselines (existing strategies for finetuning CLIP) especially for the case of open class domain shifts. ## Update after rebuttal I have read the author rebuttal and other reviews as well. I think the paper is very interesting, technically sound, and make a good use for proposing methodology inspired from the latent identification literature. Hence, I retrain my rating and vouch for acceptance. Claims And Evidence: Yes, the claims made in the submission are well supported with clear and convincing evidence. **Strong empirical evidence for the claims** - The failure cases with naive finetuning strategies of CLIP are highlighted clearly with experiments on the Terra Incognita dataset (Table 1). - The main experiments in Table 2 test CLIP-ICM with a variety of baselines on multiple benchmarks, with CLIP-ICM providing improved performance in nearly all the cases. Further, experiments in Table 3 with results for the open classes domain shift strengthen the author's claim of superior OOD generalization. - Given the requirement of interventional data in CLIP-ICM, the authors generate interventional data to test by manipulating both the base images and captions. This helps to analyze CLIP-ICM's performance with access to diverse interventional data, and ablations CLIP-ICM$^{\star}$ and CLIP-ICM$^{\dagger}$ provide further details. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem at hand. All the benchmarks used in this paper are widely used for out-of-distribution generalization. Regarding baselines for finetuning CLIP, I am not the best judge if all the relevant baselines have been used, since I am not familiar with recent works on CLIP finetuning. Theoretical Claims: Yes, I checked the correctness of the proof for all the theorems and I did not find any major issues. Experimental Designs Or Analyses: Yes, I checked the soundness/validity of all the experiments in the paper, and the experiment design doesn't have any flaws. Further, the authors have done a good job at analyzing their findings, its coherent with the experiment results. Supplementary Material: Yes, I checked all parts of the supplementary material. Relation To Broader Scientific Literature: The papers utilizes the methodology of invariant predictor learning, a fairly common approach for tackling out-of-distribution generalization. Specifically, identifiable invariant predictor learning approaches have been proposed in prior works [1, 2]. The key contribution of the paper is to apply these ideas in the framework of CLIP. References - [1] Lu, Chaochao, Yuhuai Wu, José Miguel Hernández-Lobato, and Bernhard Schölkopf. "Invariant causal representation learning for out-of-distribution generalization." In International Conference on Learning Representations. 2021. - [2] Yao, Dingling, Dario Rancati, Riccardo Cadei, Marco Fumero, and Francesco Locatello. "Unifying Causal Representation Learning with the Invariance Principle." arXiv preprint arXiv:2409.02772 (2024). Essential References Not Discussed: No, I believe all essential references have been discussed to the best of my knowledge. The authors have written have a very detailed related works section. Other Strengths And Weaknesses: **Strengths** - The paper is well written, with details about the proposed method easy to follow and the empirical findings are clear and easy to follow. **Weaknesses** - The core ideas behind CLIP-ICM are not very original, the theoretical results in the paper mostly build upon existing proof techniques in the literature. Even the methodology of extract invariant features from representation with linear identification guarantees is not very novel. However, I don't think this is a major concern, as the application of identifiable invariant feature learning specifically to CLIP framework is novel to the best of my knowledge. Other Comments Or Suggestions: - Given that the theoretical results (Theorem 5.3, 5.4) are mostly an application of existing theoretical results, I suggest the authors should rename the theorems to propositions. - Just like the authors mention that Theorem 5.3 aligns with results in prior works, the same should should be done for Theorem 5.4 with the prior work by Ahuja et al. 2023 on interventional causal representation learning. Questions For Authors: - The description in section 4 about the domain shift is a bit confusing. The authors mention with linear probe the CLIP embeddings are kept frozen (line 153), but when analyzing the results they mention finetuning CLIP (line 168). Were the CLIP representations finetuned with linear probe or only a linear probe was trained with frozen representations? - Table 1, open class domain shift scenario, why does finetuning improve the performance for base classes but it deteriorates the performance for novel classes? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful evaluation and positive feedback. We appreciate the acknowledgment that our approach offers a **solid** theoretical foundation and demonstrates **clear** empirical benefits for OOD generalization. We also value the reviewer’s recognition that our **claims are well-supported** by both theoretical analysis and practical experiments, as well as the confirmation that our references to existing literature provide **sufficient** context. Moreover, we are pleased that the reviewer finds our writing to be **clear** and our explanation of the proposed method to be **thorough**. Below, we address the reviewer’s additional questions and suggestions in detail. ## Response to Weaknesses We appreciate the reviewer’s positive comments and would like to clarify our position. We acknowledge that Theorem 5.3 and Theorem 5.5 indeed build upon previous work in the literature. However, our primary interest lies in extending these interesting theoretical insights to practical applications. In our manuscript, we carefully discuss the conditions under which Theorem 5.5 and Theorem 5.6 hold, and we leverage these conditions to propose our CLIP-ICM method, which is designed to guarantee lower OOD generalization error. One particularly surprising and encouraging observation is that by mapping both image and text embeddings into a shared invariant subspace, CLIP is able to maintain its original zero-shot performance even when confronted with domain shift—thus, ensuring that it continues to perform well on new classes after task-specific fine-tuning. We are grateful for the reviewer’s recognition of our work and believe that integrating theoretical results with real-world application strategies represents a significant contribution to the field. ## Response to Other Comments or Suggestion 1. > Given that the theoretical results (Theorem 5.3, 5.4) are mostly an application of existing theoretical results, I suggest the authors should rename the theorems to propositions. Thank you for your suggestion. In the final version, we will change these two theorems into propositions. 2. > the same should be done for Theorem 5.4 with the prior work by Ahuja et al. 2023 on interventional causal representation learning. We thank the reviewer for this suggestion. In the final version of the manuscript, we will explicitly highlight both the connections and distinctions between our Theorem 5.4 and the interventional causal representation learning work by Ahuja et al. (2023). ## Response to Questions For Authors 1. > Were the CLIP representations finetuned with linear probe or only a linear probe was trained with frozen representations? We apologize for the confusion caused by our description, and thank the reviewer for highlighting this issue. To clarify, in the domain-shift scenario (line 153), the CLIP embeddings remain frozen, and only the linear probe is trainable. At line 168, when we mentioned "fine-tuning," we were referring specifically to the linear probe training process, rather than updating the original CLIP image encoder or text encoder. We will carefully revise this description in the final manuscript to clearly differentiate these two settings and avoid further confusion. 2. > Table 1, open class domain shift scenario, why does finetuning improve the performance for base classes but it deteriorates the performance for novel classes? We thank the reviewer for raising this insightful question. This phenomenon—where fine-tuning improves performance on the base classes but degrades performance on novel classes—has been widely acknowledged in studies on adapting CLIP, including CoOP, CoCoOp, CLIP-Adapter, and CLIPood. As noted by Wortsman et al. [1] and Shu et al. [2], naively fine-tuning CLIP often results in a loss of its inherent strong generalization ability, manifesting as improved performance on the specifically fine-tuned downstream task but significantly weakened robustness under distribution shift (including both covariate shift and label shift). A likely explanation for this deterioration on novel classes is tied to **catastrophic forgetting**, a phenomenon wherein a model “forgets” previously learned information when trained on new data. In the context of Table 1, when adapting CLIP to a set of base classes, the fine-tuning procedure heavily optimizes for accurate classification of those base classes. Consequently, the original parameters—particularly those responsible for generalizing to unseen classes—are overwritten. As a result, the previously robust zero-shot capability of CLIP (which was central to its strong open-class performance) is compromised. [1] Wortsman, Mitchell, et al. Robust fine-tuning of zero-shot models. CVPR 2022. [2] Shu, Yang, et al. Clipood: Generalizing clip to out-of-distributions. ICML 2023. --- Rebuttal Comment 1.1: Comment: Thanks a lot for the rebuttal! I think the paper is very interesting, technically sound, and make a good use for proposing methodology inspired from the latent identification literature. Hence, I retrain my rating and vouch for acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review and kind recognition. We truly appreciate the time and effort you dedicated—it means a lot to us and encourages our continued work!
Summary: This work is motivated from the OOD generalization issue in CLIP, it addresses this problem via learning an invariant causal mechanism and proposes CLIP-ICM framework, which includes collecting interventional data, estimating a linear projection matrix, and predicting in the invariant subspace. The proposed CLIP-ICM shows improvement in OOD datasets. ## update after rebuttal I appreciate the authors for their response, which addresses most of my concerns. I intend to maintain my original score. Claims And Evidence: In general, the claims are well supported. The paper originates from a well-studied principle in invariant learning, the high-level idea is not new, but it would still contribute to the CLIP model generalization. Methods And Evaluation Criteria: The pipeline of the method is generally clear, but some of the technical details may need further clarification. For example, it would be better to include a more detailed description of the interventional data generation process. Theoretical Claims: The theoretical analysis looks good to me. Experimental Designs Or Analyses: The experiment would be improved if further diverse environments and contexts are included. Supplementary Material: Yes, I have reviewed the appendix. Relation To Broader Scientific Literature: The paper is related to CLIP applications in different domains. Essential References Not Discussed: N/A Other Strengths And Weaknesses: In general, the paper would be further improved with a more comprehensive evaluation. Other Comments Or Suggestions: There are a few informal writing styles, e.g., the footnote in page 4 takes a single sentence in the main text. Questions For Authors: Could the authors share more details of the environment diversity and applicable scenarios of the proposed method? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and positive feedback. We are pleased that the reviewer recognizes our work as **well-supported**, highlighting our **clear** pipeline and **sound** theoretical analysis. Below, we provide detailed responses addressing the specific concerns raised by the reviewer. ## Response to Methods and Evaluation Criteria We appreciate the reviewer’s comments regarding the interventional data generation process. We would like to clarify the following points: 1. In the original manuscript, lines 369–371 (left column) briefly outline the steps for collecting image-based interventional data, while lines 381–384 (left column) and line 330 (right column) briefly describe the process for collecting text-based interventional data. And we mention that the detailed collection process is provided in Appendix H.1. 3. In Appendix H.1, we present detailed description of the collection procedures for both image-based and text-based interventional data: 1. For the image-based interventional data, we explain that it is generated using eight data augmentation techniques, including: ColorJitter, GrayScale, GaussianBlur, RandomInvert, RandomRotation, RandomPosterize, RandomSolarize, and RandomEqualiz. These data augmentation techniques directly implemented from the *torchvision.transform* package. 2. The text-based interventional data comprises two components: the text description model and the text intervention model. Both models are generated by invoking GPT-4o. The prompts used for these models are provided on page 21, lines 1118–1142, and Figure 5 presents an example of the text-based interventional data. Please let us know if you have any additional suggestions for further improvements regarding this aspect. ## Response to Experimental Designs Or Analyses To address your concerns, in addition to the datasets used in the original manuscript (PACS, VLCS, OfficeHome, Terra Incognita, DomainNets, ImageNet, ImageNet-V2, ImageNet-S, ImageNet-A, and ImageNet-R), we additionally conducted experiments on the iWildCam-WILDS 2020 dataset. iWildCAM comprises 203,029 images from 182 different animal species, which were collected from 323 camera traps distributed across various locations. The images obtained from different locations exhibit variations in lighting, color, camera angle, background, vegetation, and relative animal frequencies. We follow the setting of Koh et al. (2021)[1], use images from 243 locations as the training domain and those from 48 other locations as the test domain. We report the average macro F1 score of CLIP, CLIP-ICM$^*$, CLIP-ICM$^\dagger$, and CLIP-ICM under both ID and OOD conditions, as shown in the table below: | Method| ID (48 Locations) | OOD (243 Locations) | | :- | :-: | :-: | | CLIP|14.2|10.6| | CLIP Linear-Probe|54.6|41.4| | CLIP-ICM$^*$|15.6|13.3| | CLIP-ICM$^*$ Linear-Probe|56.2|42.1| | CLIP-ICM$^\dagger$|15.2|12.2| | CLIP-ICM$^\dagger$ Linear-Probe|55.6|44.3| | CLIP-ICM|15.8|14.1| | CLIP-ICM Linear-Probe|57.1|46.1| [1] Koh et al. Wilds: A benchmark of in-the-wild distribution shifts. ICML 2021. ## Response to Other Strengths And Weaknesses Thank you for suggesting a more comprehensive evaluation. We have extensively validated our method across multiple datasets, including PACS, VLCS, OfficeHome, Terra Incognita, DomainNets, ImageNet, ImageNet-V2, ImageNet-S, ImageNet-A and ImageNet-R along with detailed ablation studies. Additionally, we have included experiments on the iWildCam-WILDS 2020 dataset in our previous response. Moreover, in our reply to Reviewer Pp7f, we added ablation studies concerning the role of $A_{inv}$. Please let us know if you have any further suggestions regarding other experiments. ## Response to Other Comments Or Suggestions We thank the reviewer for pointing out this issue. In the final version, we will check all formatting issues and make the necessary revisions. ## Response to Questions For Authors We thank the reviewer for the question and are happy to elaborate. 1. The environmental diversity across our evaluation datasets comes in different ways. For datasets like PACS, Office-Home, DomainNet, ImageNet-Sketch, and ImageNet-R, diversity primarily stems from variations in visual styles. In VLCS and Terra Incognita, it is reflected in background complexity, lighting conditions, and camera viewpoints. For ImageNet V2 and ImageNet-A, diversity arises from changes in image sources and the inclusion of hard-to-classify samples, respectively. 2. Our method is generally applicable to real-world scenarios where environment-induced distribution shifts occur. Potential applications include monitoring in wildlife habitats, perception systems in autonomous driving, and cross-domain image-text retrieval. In particular, tasks that require stable semantic understanding across diverse environments can benefit from the CLIP-ICM framework’s ability to isolate invariant semantic factors from CLIP representations.
Summary: This paper introduces CLIP-ICM, a framework that improves CLIP’s OOD robustness by leveraging a causal perspective to separate invariant and variant factors. By learning a linear mapping to the invariant subspace using interventional data, CLIP-ICM enhances performance across multiple OOD datasets. Claims And Evidence: To the best of my knowledge, the evidence supports the claims well. Methods And Evaluation Criteria: To the best of my knowledge, the evaluation follows the community convention. Theoretical Claims: I have checked section 5.1 and do not find any issue Experimental Designs Or Analyses: I have checked section 7 and believe it follows the community standard Supplementary Material: I have checked the supplementary material A/B/C Relation To Broader Scientific Literature: The paper studies the OOD from a causal inference perspective, which bridges the gap between the two fields Essential References Not Discussed: n/a Other Strengths And Weaknesses: 1. I am curious on the role of A_inv and the interventional data. After I check the ablation study, seems like there is no ablation study on this level. would be beneficial to present the ablation of the three steps in figure 3 to help reader to understand the importance of each component. 2. Where does the variance come from in the data reported in Table 2 and 3? Is it from the difference of interventional data or different initialization? It would be beneficial to have clarity don't that 2. Overall the paper's presentation is very clear and comprehensive, and studies an important problem that lies in the interest of the community. Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and valuable suggestions. We sincerely appreciate the reviewer for their positive feedback, especially for finding our claims **well-supported**, recognizing the **clarity** and **comprehensiveness** of our paper's presentation, affirming that our evaluation methodology **aligns with community standards**, and highlighting our contribution in **bridging** causal inference and OOD generalization. Additionally, we provide detailed responses to address the two specific concerns raised by the reviewer as follows. ## Response to Other Strengths And Weaknesses ### W1: We appreciate the reviewer's interest in understanding the contribution of the linear projection matrix $A_{inv}$ and the role of interventional data. Regarding the role of interventional data, we would like to first emphasize a few points: 1. Interventional data and the linear projection matrix $A_{inv}$ are mutually dependent components of our method. According to Equation (9), without interventional data, it is unlikely to estimate $A_{inv}$ in our framework. 2. As shown in Tables 2, 3, 5, 6, and 10–14, we have conducted extensive comparisons between three variants of our method utilizing different type of interventional data. Specifically: - CLIP-ICM$^*$: using only image-based interventional data, - CLIP-ICM$^\dagger$: using only text-based interventional data, - CLIP-ICM: using both types of interventional data. 3. We have provided an ablation study on the effect of different numbers of interventional data pairs in Appendix M, Figure 6 (d). Regarding the role of $ A_{inv} $, we agree with your suggestion and thus we include an additional ablation experiment to further illustrate its importance. Specifically, we use our generated image-based interventional data to train a linear-probe on the DomainBed dataset. The experimental results are summarized as follows. | Method | PACS | VLCS | OfficeHome | TerraInc | DomainNet | AVG. | | :---------------------------------- | :------: | :------: | :--------: | :------: | :-------: | :------: | | Linear Probe | 96.4 | 78.7 | 81.9 | 60.2 | 55.0 | 74.4 | | Linear Probe + Interventional data | 96.8 | 79.3 | 82.3 | 60.5 | 55.8 | 74.9 | | CLIP-ICM$^*$ + Linear Probe | **97.5** | **86.5** | **84.6** | **64.3** | **64.0** | **79.0** | From the results, we can observe that: 1. Incorporate image-based interventional data with linear probe only slightly (0.5%) improve the performance of linear probe. 2. Despite all incorporate image-based interventional data for training, the performance of CLIP-ICM$^*$ + Linear Probe is significantly better than that of Linear Probe + Interventional data. These findings demonstrate that our proposed $A_{inv}$ module (i.e., the projection to the invariant subspace) indeed improve the performance of CLIP in OOD scenarios. ### W2: Regarding the source of variance in Tables 2 and 3, each value in Tables 2 and 3 represents the mean and standard deviation over 5 runs with different random seeds. According to the standard evaluation protocol of the DomainBed Benchmark, we believe that the primary source of variance originates from the various splits of the training, validation, and test datasets across multiple runs.
null
null
null
null
null
null
Diss-l-ECT: Dissecting Graph Data with Local Euler Characteristic Transforms
Accept (poster)
Summary: In their paper "Diss-l-ECT: dissecting graph data with local Euler characteristic trasnform", the authors suggest a local version of Euler characteristic transform (ECT) that, given a graph with node features, assigns to each node an additional feature vector containing Euler characteristics of local subgraphs with nodes selected by a range of feature-based thresholds. The authors then show that XGBoost on the concatenation of original features and l-ECT features performs very well in various node clasification tasks, in particular for heterophilic graphs. Claims And Evidence: One important baseline is missing. Methods And Evaluation Criteria: Yes Theoretical Claims: Didn't check Experimental Designs Or Analyses: Seems sensible Supplementary Material: Skimmed it Relation To Broader Scientific Literature: Not sure what one is supposed to say here. This paper suggests l-ECM, clearly building up on ECM etc. Essential References Not Discussed: Not that I know of Other Strengths And Weaknesses: Strengths: The paper is very clearly written. l-ECT definition makes sense. Experimental results are convincing. Weaknesses: No intuition about why/when l-ECT works. No baseline comparison with XGBoost on the node features. Overall, borderline paper but I tend to acceptance. MAJOR COMMENTS * The main argument of the paper is that local Euler characteristics can contain information useful for node classification. What would be very helpful is to have some illustration of how this can be the case. Can you show an example of a small toy graph with nodes from two classes and some node features, where local Euler characteristic in 1-hop neighborhood predicts the class? Currently, the paper provides very little intuition for why/when this should work. The authors emphasize heterophilic graphs, so it would be great if the toy graph is heterophilic. For a graph, Euler characteristic is simply the number of nodes minus the number of edges. Maybe the toy graph can have 2D node features and a particular 2D direction and a particular threshold value that would make 1-dimensional l-ECT (m=l=1) perfectly predictive of class? Something along the lines. * In every table I am missing one IMHO crucial comparison: XGBoost performance on node features without any l-ECT features. Please add this to Tables 1--5 as additional row. Currently it is impossible to say if l-ECT performance is due to XGBoost on original node features performing well or due to l-ECT features actually providing some additional information. If for some of the graphs the performance of XGBoost on original features is as good as for l-ECT, it would mean that l-ECT features are useless for that graph. * I did not understand the point of section 5.2. Randomly rotated graphs shown in Figure 1 could be aligned using a trivial application of Procrustes rotation. Why does one need l-ECT machinery for that? Is this whole section supposed to be a sanity check that l-ECT alignment works correctly? If so, it should be presented as a "merely" sanity check, and not as a important result that takes 1 page of text and 1 figure. The result seems trivial. Other Comments Or Suggestions: MINOR COMMENTS * On page 2, simplicial complex is defined abstractly, without any node features. Then page 3 starts with simplicial complex X \in R^n. But what does it mean for X to be in R^n? This was never defined. Featured graphs and feature vectors are only introduced later. This makes it confusing for the reader. * Several times the authors refer to "point clouds", even calling them an "important special case". I don't understand how l-ECTs would be applicable to point clouds, where the (local) Euler characteristic is just the number of points. This seems to carry no useful information. Please clarify what is meant. * In section 5.1, for all graphs please specify where they are heterophilic or homophilic (e.g. WebKB datasets are homo- or hetero- philic? it's not stated). Perhaps use some measure of homophily and report the value for each graph. * "Amazon dataset" paragraph: "combintaion of l-ECT1 and l-ECT2 outperforms on Photo" -- this statement is misleading, the difference is 0.1% with errors being ~1%, clearly not significant. In fact, I think GAT should be bold for Photo in Table 3, because it's not meaningfully different from the top value in the Photo column. I suggest to use bold in all such cases in all tables (e.g. l-ECT2 for Amazon Ratings in Table 2 etc.) * "Planetoid datasets" -- what is "planetoid"? * I did not understand Table 7. Does it take ranks from Platonov et al.? If yes, how are your methods added there? Also, is H2GCN the best or the worse? It has the highest rank value, meaning it's the worst? As in Figure 4 (higher value => worse). If FSGNN is the best, why did not you use it in your benchmarking? If H2GCN is the best, then why did you find it so much worse then l-ECT in Figure 4? Very confusing. Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, we thank you for the constructive and thoughtful feedback. We’re especially grateful for recognizing the clarity of our exposition, the soundness of our method, and the convincing experimental results. Below, we respond to all concerns. > Question regarding illustration and intuition Thank you for the helpful suggestion. To clarify, we are not computing the local Euler characteristic, but rather the local Euler Characteristic Transform (l-ECT). Unlike the raw Euler characteristic, l-ECT scans the local neighborhood in multiple directions, computing Euler characteristics of sublevel sets. This directional process, known to be invertible under mild assumptions, provides a rich fingerprint of local geometry and topology. While the Euler characteristic is a topological invariant, the collection of directional profiles captures detailed geometric structure. **We'll add more explanations and illustrations in the revision.** > “No intuition about why/when l-ECT works. No baseline comparison with XGBoost on the node features.” Regarding intuition, see the response above; we also plan on illustrating the computation better. Regarding the baseline, we’re happy to include an **additional comparison against XGBoost** for the respective node classification tasks. > Request for XGBoost on raw node features We agree that this baseline is important and are happy to **include the corresponding results in our revision.** In the meantime, Table 6 already shows that l-ECT features are informative: as the number of directions used in the transform is reduced, classification performance often degrades substantially. This suggests that the l-ECT encodes nontrivial structural information not already present in the raw features, and that this information contributes meaningfully to the model’s performance. > Question about spatial alignment using l-ECT Thank you for raising this. While the experiment in Section 5.2 does serve as a sanity check, it also demonstrates that l-ECT provides a scalable alternative to alignment methods like Procrustes, which rely on costly SVD. Our approach scales linearly with n, l, and m, and allows control over approximation quality via m and l. **We'll revise the text to better explain this motivation and highlight the practical benefits of l-ECT-based alignment.** > Question on the definition of simplicial complexes You're right—thanks for catching this. We meant geometric, not abstract, simplicial complexes, and **will revise the text to reflect this consistently.** > Question on l-ECT and point clouds To clarify, we don’t compute a single Euler characteristic, but use the local Euler Characteristic Transform (l-ECT), which builds a vector from directional filtrations of the local neighborhood. Point clouds are treated as edge-free geometric graphs (0D complexes in R^n. Since the ECT is invertible, it serves as a rich fingerprint of local structure (in this case the point cloud). **We’ll revise the text to make this clearer.** > Request of homophily score reporting Thank you for the suggestion. We will add a standard homophily measure (edge homophily ratio) for each dataset and indicate whether each dataset is homophilic or heterophilic. For example, the WebKB datasets exhibit low edge homophily (~0.2) and are thus considered heterophilic, while citation graphs like Cora are homophilic (e.g., ~0.8). > Request for using bold for all results within error margins Thank you for pointing this out. **We will update all result tables to use boldface for all values within one standard error of the best performance.** We believe this makes the comparisons clearer and more meaningful. > “"Planetoid datasets" -- what is "planetoid"?” “Planetoid” refers to the data introduced in the benchmark (Yang et al., 2016), containing the citation graph datasets Cora, Citeseer, and Pubmed. The dataset is introduced in l.353 of our manuscript. **We will clarify this.** > Questions about Table 7 Thank you for pointing this out. Table 7 uses the ranking setup from Platonov et al. (2023), with lower mean ranks indicating better performance. We added l-ECT post hoc using the same protocol. FSGNN, while strong, was excluded from our benchmarking due to its specialized design. As the comparison focuses on heterophilic graphs, such methods are favored—but our goal is to show that l-ECT performs well without architectural tuning, **which we’ll clarify in the revision.** **We are grateful for your recognition of our contributions and your close reading of the manuscript.** If the improvements we propose address your concerns, we would be very grateful if you would consider raising your score. Thank you once again for your constructive and thoughtful review, we are happy to address any other questions you might have! Best regards, The Authors --- Rebuttal Comment 1.1: Comment: Thank you for your response. I can see that all 4 reviewers gave the identical score of 3, so the paper has a good chance of getting in. I also have to say that I was a bit surprised that the authors did not do any single new experiment for their rebuttals... Personally I would like to keep the score at 3 -- I continue to recommend acceptance, but only weakly. > To clarify, we are not computing the local Euler characteristic, but rather the local Euler Characteristic Transform (l-ECT). Unlike the raw Euler characteristic, l-ECT scans the local neighborhood in multiple directions, computing Euler characteristics of sublevel sets. Yes, I understand that. > We agree that this baseline [XGBoost on raw node features] is important and are happy to include the corresponding results in our revision. In the meantime, Table 6 already shows that l-ECT features are informative: as the number of directions used in the transform is reduced, classification performance often degrades substantially. Thank you for pointing me to this supplementary table. For 3 datasets out of 5 there is almost no difference there between the min and the max number of directions. So it remains possible that on these datasets XGBoost on raw features would perform as well. But I can appreciate that on 2 datasets there is indeed a pronounced difference. > While the experiment in Section 5.2 does serve as a sanity check, it also demonstrates that l-ECT provides a scalable alternative to alignment methods like Procrustes, which rely on costly SVD. Well... SVD is not that costly. For the tiny graphs in Figure 1 the cost is negligible. If you want to seriously claim that your procedure is faster than Procrustes rotation for very large graphs, then it should be directly demonstrated using runtime experiments. Otherwise it's fine with me if Section 5.2 gets reformulated as a sanity check. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for the clarification and the thoughtful follow-up. We fully understand your expectation that a rebuttal might also include new experimental results, and we appreciate you sharing this perspective. To complement our earlier conceptual, theoretical, and methodological responses, we are now able to share first results from ongoing experiments: * **GraphSAGE** consistently underperforms compared to our l-ECT-based method. For instance, on Computers we observe ~91.0%, and on Photo ~93.2%, both notably lower than our reported results. * **XGBoost trained solely on the raw features** performs considerably worse, e.g., ~86.6% on Computers (vs. ~92.2%) and ~92.1% on Photo (vs. ~94.9%). * **Additional preliminary node classification results** also confirm the strength of our method, exhibiting accuracies up to the following: | Setting | Minesweeper | Tolokers | Questions | |------------------------|-------------|------------|------------| | l-ECT_1 + l-ECT_2 | 0.8699 | 0.8309 | 0.7639 | | l-ECT_1 | 0.8587 | 0.7936 | 0.7970 | | l-ECT_2 | 0.6233 | 0.8396 | 0.7572 | More experiments (those requested by Reviewers vCjY and DDk2) are currently running, and we are fully committed to incorporating all results into the camera-ready version. Finally, on Section 5.2: you're right—Procrustes is negligible for small graphs. Our aim was to highlight its impact in large or repeated alignment scenarios, but we’ll revise the section to better reflect this and make its role as a sanity check more explicit. We are grateful for your fair and constructive review. If the additional experiments and clarifications we’ve provided align with your expectations, we would kindly ask you to consider raising your score. Your support would make a meaningful difference at this stage. Best regards, The Authors
Summary: The paper introduces a local Euler Characteristic Transform (l-ECT), a local topology measure. l-ECT is an application of ECT for analysis of a neighborhood. Then, author apply it to enhance expressivity and interpretability of graph representations (mostly graph's nodes classification). Authors identify crucial issues where GNN fail while l-ECT are useful. A rotation invariant l-ECT is provided. Theoretical guarantees are provided. Claims And Evidence: Main claims are: > As our main contributions, we (i) construct l-ECTs in the context of embedded simplicial complexes (and graphs) and theoretically investigate their expressivity in the special case of featured graphs, (ii) empirically show that this expressivity positions l-ECTs as a powerful general tool for interpretable node classification, often superior to standard GNNs, and (iii) introduce an efficiently computable rotation-invariant metric based on l-ECTs that facilitates the spatial alignment of geometric graphs. Claims are mostly convincing, except the "interpretability". The examples of interpretability are not provided. The details on evaluation of rotation-invariant metric is not provided. How do you perform differentiation since the Euler characteristic is not differentiable? Methods And Evaluation Criteria: Benchmarks are fine. The performance of l-ECT on WebFB dataset is better that on common Planetoid. Do you have an explanation? Theoretical Claims: Yes, I have checked most of the proofs. Experimental Designs Or Analyses: Experiments are sound. See questions above. Supplementary Material: I have reviewed proofs of the theorems. Relation To Broader Scientific Literature: Authors cite broader scientific literature, for example SOTA studies on topological deep learning and learning with hetero-graphs. Essential References Not Discussed: -- Other Strengths And Weaknesses: **Strengths** 1. The paper introduced a novel concept, l-ECT and empirically proved its usefulness for node classification in graphs. **Weaknesses** 1. The proposed method outperformed standard GCN, GAT, GIN but is still worse that modern methods dedicated to hetero-graphs (see Appendix). 2. If I understood correctly, l-ECT method is not applicable for graphs without features associated with nodes. Other Comments Or Suggestions: 1. The definition of a simplical complex is quite clumsy mixing together geometrical and abstract simplical complexes. 2. Hyperlinks to proofs of theorems in Appendix is not provided in main text, 3. I assume that l-ECT is defined for a __geometric__ simplical complex, otherwise the definition (3) doesn't make sense. Why you still use term "abstract simplicial complex" in line 113? Consequently, l-ECT makes sense for graphs only if each vertex has attributes and a graph is mapped onto R^n. If I am right, this shoud be stated more directly. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, we sincerely thank you for the thoughtful and detailed feedback! **We are glad that you found our contributions novel and our empirical and theoretical results sound.** We especially appreciate your engagement with the proofs and your recognition of l-ECT’s usefulness for node classification. Below, we address your comments point by point. > The examples of interpretability are not provided. Thank you for raising this point. A discussion on the interpretability of l-ECT is included in Appendix A.2.1, where we show how feature importance of the underlying model naturally gives rise to interpretability. However, we agree that the main paper could more clearly showcase this aspect. **In the revised version, we will better illustrate how l-ECT enables interpretation of node roles and neighborhood structures.** > The details on evaluation of rotation-invariant metric is not provided. Section 5.2 evaluates our rotation-invariant metric for aligning spatial graph representations, showing it captures structural similarity under random rotations. **We’ll clarify this purpose in the text and add more details.** > How do you perform differentiation since the Euler characteristic is not differentiable? The Euler Characteristic Transform is made differentiable by embedding it into a continuous pipeline using soft (smoothed) filtrations. It doesn’t change the discrete nature of EC itself but creates a differentiable approximation around it. **We will clarify this in our revision.** > The performance of l-ECT on WebFB dataset is better that on common Planetoid. Do you have an explanation? The stronger performance of l-ECT on WebKB over Planetoid datasets reflects structural differences: WebKB graphs are heterophilic, where local structure matters more than feature similarity—making l-ECT well-suited—while Planetoid graphs are homophilic, favoring message-passing GNNs. **We’ll clarify this in the revision.** > The proposed method outperformed standard GCN, GAT, GIN but is still worse that modern methods dedicated to hetero-graphs (see Appendix). We agree that recent heterophily-specific methods are highly competitive on heterophilic benchmarks. However, our goal with l-ECT is not to design a specialized architecture optimized solely for heterophily, but rather to propose a general-purpose, interpretable representation that captures rich local structural information without message passing. What’s notable is that l-ECT achieves competitive performance despite being a non-aggregating and model-agnostic approach, and in many cases closes the gap with specialized models. We believe this highlights the complementary strengths of topological representations, particularly when combined with other methods. **We will emphasise this aspect further in our revision and substantiate it using additional experiments.** > If I understood correctly, l-ECT method is not applicable for graphs without features associated with nodes. You're right—l-ECT, as formulated, assumes node features or geometric embeddings. This is a common assumption in graph learning, where such features are typically available and important for downstream tasks like node classification. Our method therefore aligns well with standard benchmarks and real-world settings. **We'll clarify this assumption in the paper.** > The definition of a simplical complex is quite clumsy mixing together geometrical and abstract simplical complexes. Thank you for pointing this out—we completely agree. Our current presentation conflates geometric and abstract simplicial complexes. Since our method relies on geometric realizations, we’ll restrict the discussion to geometric simplicial complexes in the revised version. **We’ll clarify the terminology and focus solely on the relevant setting for l-ECT.** We appreciate your attention to this—it helps improve the clarity and precision of the exposition. > Hyperlinks to proofs of theorems in Appendix is not provided in main text Thanks for this suggestion. **In the revised version, we will include hyperlinks and explicit references from each theorem in the main text to its corresponding proof in the appendix.** > I assume that l-ECT is defined for a geometric simplical complex, otherwise the definition (3) doesn't make sense. Why you still use term "abstract simplicial complex" in line 113? Consequently, l-ECT makes sense for graphs only if each vertex has attributes and a graph is mapped onto R^n. If I am right, this shoud be stated more directly. This is a mistake, please see response above. **We will fix this in our revision.** We are grateful for your recognition of our contributions and your close reading of the manuscript. If the improvements we propose address your concerns, we would be very grateful if you would consider raising your score. Thank you once again for your constructive and thoughtful review, we’d be happy to address any other questions you might have. Best regards, The Authors --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. I hope that the clarifications and an explanation how l-ECT is differentiated will be included in the camera-ready version. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We're glad the clarifications were helpful and will ensure that the final version includes the necessary explanations and improvements.
Summary: The paper introduces the Local Euler Characteristic Transform (l-ECT), a novel approach for graph representation learning that extends the Euler Characteristic Transform (ECT) to local neighborhoods. The key innovation is capturing local structural information in graphs without relying on conventional message-passing aggregation schemes used in Graph Neural Networks (GNNs). The authors theoretically show that l-ECTs can preserve local neighborhood information without loss and provide a rotation-invariant metric for spatial alignment. They empirically demonstrate that l-ECT-based approaches outperform standard GNNs on various node classification tasks, particularly on heterophilous graphs where traditional aggregation methods struggle. ## update after rebuttal After checking rebuttal from different reviewers, I incline to keep my score. Claims And Evidence: Most claims are supported by theoretical analysis and empirical results. Methods And Evaluation Criteria: The proposed methods are generally sound: #### 1. Using l-ECTs to capture local neighborhood information is a novel approach that addresses limitations in traditional GNNs. #### 2.The evaluation on both homophilous and heterophilous graph datasets is appropriate. #### 3. The comparison with standard GNNs (GCN, GAT, GIN) and a heterophily-specific model (H2GCN) provides good context. Theoretical Claims: Most claims are supported by theoretical analysis and empirical results. Experimental Designs Or Analyses: The experimental design is generally sound. Supplementary Material: I checked the code but only read part of that. Relation To Broader Scientific Literature: I am not sure about that. Essential References Not Discussed: LINKX (Lim et al., 2021) - A method that also bypasses message passing for heterophilous graphs WRGAT (Suresh et al., 2021) - Addresses heterophily with rewiring techniques ACM-GCN (Luan et al., 2022) - Reports state-of-the-art results on heterophilous benchmarks Papers on graph topological operators like GraphGPS (Rampavsek et al., 2022) Other Strengths And Weaknesses: ### Strengths: #### 1. Novel integration of topological data analysis techniques with graph learning #### 2. Theoretically grounded approach with provable properties #### 3. Strong performance on heterophilous graphs without specialized architecture ### Weaknesses: #### 1. Limited exploration of computational efficiency - l-ECT computation could be expensive for large graphs #### 2. Relationship between l-ECT parameters and performance is not thoroughly analyzed #### 3. Limited comparison to recent heterophily-specific approaches Other Comments Or Suggestions: Actually, I think the experiments should pay more attention to the special properties the proposed method can demonstrate. I don't think heterophilous graph is a good target. Questions For Authors: 1. How does the computational complexity of computing l-ECTs scale with graph size and neighborhood size? This seems critical for practical applications. 2. Could you clarify your notion of graph embedding using node features? How do you handle cases where the embedding might not preserve edge relationships? 3. What is the theoretical justification for l-ECTs working well on heterophilous graphs beyond the empirical results? 4. How do you address the scenario where nodes have identical feature vectors but different structural roles in the graph? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, we sincerely thank the reviewer for the careful and thoughtful evaluation of our work. We appreciate your recognition of our contributions, particularly the novelty of integrating topological data analysis into graph representation learning, and your acknowledgment of the soundness of our theoretical and empirical work. Below, we address your comments and questions in detail. > Limited exploration of computational efficiency—l-ECT computation could be expensive for large graphs. For a fixed node $x$, the computational complexity of its $\ell ECT_k(x)$ is: $$ O\big(m \cdot l \cdot |N_k(x)|\big), $$ where: - $m$ is the number of sampled directions, - $l$ is the number of filtration steps, - $|N_k(x)|$ is the number of vertices (or simplices) in the $k$-hop neighborhood of $x$. In other words, the complexity scales linearly with neighborhood size and sampling resolution. In practice, as shown in Appendix A.2.1, even a small subset of directions suffices for high accuracy, reducing runtime *considerably*. For large graphs, parallelization of l-ECT computation provides scaling of the procedure. **We will include a thorough discussion on computational complexity and improved implementations in our revision.** > Relationship between l-ECT parameters and performance is not thoroughly analyzed In Appendix A.2.1, we provide an ablation study on the number of sampled directions, showing how performance varies when using a smaller portion of directions. That said, we agree that further exploration with respect to other parameters such as the number of filtration steps l would be valuable. **We are happy to include these additional ablations in the revised version to provide a more complete picture of the trade-offs involved!** > LINKX (Lim et al., 2021) - A method that also bypasses message passing for heterophilous graphs WRGAT (Suresh et al., 2021) - Addresses heterophily with rewiring techniques ACM-GCN (Luan et al., 2022) - Reports state-of-the-art results on heterophilous benchmarks Papers on graph topological operators like GraphGPS (Rampavsek et al., 2022) We appreciate the references and agree these are valuable works that provide additional context. **We will discuss these references in our revised manuscript for an additional comparison!** > Actually, I think the experiments should pay more attention to the special properties the proposed method can demonstrate. This is an insightful observation. Our motivation for including heterophilous benchmarks was to highlight the method’s robustness in the absence of homophily, where traditional GNNs often degrade. We agree that l-ECT’s real strength lies in capturing local structure independent of homophily assumptions, yield broad utility. **In the revision, we will reframe the narrative to better emphasize the general-purpose nature of l-ECT.** > How does the computational complexity of computing l-ECTs scale with graph size and neighborhood size? This seems critical for practical applications. Please refer to the response above. > Could you clarify your notion of graph embedding using node features? How do you handle cases where the embedding might not preserve edge relationships? The embedding is created by assigning a node to the point in Euclidean space which is specified by its respective node feature vector. Edges are drawn between nodes if and only if there's an edge in the original graph. Since nodes may appear multiple times, edges might get lost by using this embeddings (see the paragraph before Thm.1). Therefore, in practice we use virtual edges to prevent such cases (i.e. an embedded node may occur multiple times as a neighbor). **We will clarify this subtlety in our revised paper, thanks for spotting that!** > What is the theoretical justification for l-ECTs working well on heterophilous graphs beyond the empirical results? The core limitation of message passing in heterophilous settings lies in its aggregation mechanism, which tends to blur dissimilar feature signals. Our method bypasses aggregation entirely, instead constructing a vectorized topological summary of a node’s neighborhood. From a theoretical standpoint, Thm. 1 shows that l-ECT_1 encodes sufficient information to reconstruct the feature vectors of a node’s neighbors without aggregation. This property is critical in heterophilous settings where the difference between features (not their average) is informative. > How do you address the scenario where nodes have identical feature vectors but different structural roles in the graph? Please see our response above. **We appreciate your detailed review and the encouragement regarding our theoretical and empirical contributions.** If our revisions adequately address your concerns, we would be very grateful if you would consider raising your overall score. Thank you again for your time and constructive feedback! Please let us know if you have any other questions or concerns. Best regards, The Authors
Summary: The paper introduces a novel method called the Local Euler Characteristic Transform (ECT), designed to enhance graph representation learning by preserving critical local structures while maintaining global interpretability. ECT provides a lossless representation of local neighborhoods around graph nodes. This method, grounded in topological principles, uses Euler characteristic transformations to capture both structural and spatial information around each node, making it particularly useful for tasks like node classification. The authors argue that ECT can overcome limitations found in GNNs, especially in graphs with high heterophily, where aggregating neighboring information could obscure crucial differences. The paper also introduces a rotation-invariant metric based on ECT, which helps in spatially aligning data spaces, offering a practical advantage in comparing graph data. Claims And Evidence: Claims made in the submission are clear and convincing. Methods And Evaluation Criteria: The Texas and Wisconsin datasets are relatively small, and previous studies [1] have indicated that the Chameleon and Squirrel datasets contain significant amounts of duplicate data, which undermines their reliability. So the above datasets are not suitable for evaluating heterophilic GNNs any more, and I suggest that authors should consider more datasets proposed by [1] such as minesweeper, tolokers, questions. [1] Platonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress? ICLR 2023. Theoretical Claims: I am not sure the correctness of the proposed proofs for the theorems. Experimental Designs Or Analyses: More heterophilic GNN baselines such as spectral-based models [2] [3] [4] and spatial-based models [5] [6] should be considered in the section 5. Besides, GraphSAGE is also a powerful baseline for heterophilic graphs. [2] Bo et al. Beyond low-frequency information in graph convolutional networks. AAAI 2021. [3] He et al. BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation. NeurIPS 2021. [4] Luan et al. Revisiting Heterophily For Graph Neural Networks. NeurIPS 2022. [5] Li et al. Finding global homophily in graph neural networks when meeting heterophily. ICML 2022. [6] Want et al. Powerful graph convolutional networks with adaptive propagation mechanism for homophily and heterophily. AAAI 2022. Supplementary Material: I briefly read all the supplementary material. Relation To Broader Scientific Literature: This paper builds on limitations of losing local information by introducing the Local Euler Characteristic Transform (ECT), a method inspired by topological data analysis, which provides a topological fingerprint of local graph neighborhoods. The use of Euler characteristic transformations is rooted in previous works like Turner et al. (2014), who utilized Euler characteristics for global shape classification. However, unlike prior work, which focuses on global properties, this paper introduces a local variant, ECT, that preserves detailed neighborhood information while still maintaining global interpretability. This extension is crucial, especially for graphs with high heterophily, where local node characteristics may be more important than aggregated neighbor features. Essential References Not Discussed: Please refer to my comment in Experimental Designs Or Analyses section. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written, and the motivation behind the proposed approach is sound and clearly articulated. 2. The idea of extending the Euler Characteristic Transform to capture local graph structures is novel and intriguing. Weaknesses: 1. Please refer to my comments in the sections above. 2. In Table 7 of Appendix A2.3, all the listed models should be accompanied by their corresponding references. Other Comments Or Suggestions: NA. Questions For Authors: NA. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, we sincerely thank you for the thoughtful and constructive feedback. We are pleased that you found our paper to be well-written and our proposed method—the Local Euler Characteristic Transform (l-ECT)—to be novel and well-motivated. Below, we address your main concerns point-by-point and outline the changes we will make in the revised version. > The Texas and Wisconsin datasets are relatively small, and previous studies [1] have indicated that the Chameleon and Squirrel datasets contain significant amounts of duplicate data, which undermines their reliability. So the above datasets are not suitable for evaluating heterophilic GNNs any more, and I suggest that authors should consider more datasets proposed by [1] such as minesweeper, tolokers, questions. [1] Platonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress? ICLR 2023. Thank you for pointing this out. We agree with your assessment of Chameleon and Squirrel and appreciate the recommendation. While our focus was to provide a broad comparison with commonly used benchmarks in heterophilic GNN evaluation, we recognize the importance of newer, cleaner datasets. **In the revised version, we will include results on datasets like Minesweeper, Tolokers, and Questions as proposed in [1].** > More heterophilic GNN baselines such as spectral-based models [2] [3] [4] and spatial-based models [5] [6] should be considered in the section 5. Besides, GraphSAGE is also a powerful baseline for heterophilic graphs. [...] We appreciate the reviewer’s recommendation and agree that these are strong and relevant baselines in the context of heterophily-specific architectures. However, we wish to emphasize that our approach is not only proposed as a specialized architecture for heterophilic graphs, but rather as a general-purpose representation method that provides a robust and interpretable alternative to neighborhood aggregation. Our focus is on illustrating the effectiveness of l-ECT as a complementary or standalone representation, rather than competing directly with task-specific model architectures. That said, we agree that GraphSAGE is a widely adopted and illustrative baseline for generalization across homophilic and heterophilic settings. **We will include GraphSAGE in our revised experiments, and clarify the scope of our contribution with respect to architecture design for heterophilic graphs.** > In Table 7 of Appendix A2.3, all the listed models should be accompanied by their corresponding references. Thank you for spotting this oversight. **We will revise Table 7 to include full citations for each model.** We thank the reviewer again for acknowledging the **novelty, motivation, and clarity of our work.** We believe that the l-ECT offers a fundamentally new perspective on graph representation learning—particularly by preserving geometrical-topological information at the local scale, a signal that is often diluted by aggregation-based models. If our revisions adequately address your concerns, we would be grateful if you would consider raising your overall score. We are happy to address any additional questions. Best regards, The Authors
null
null
null
null
null
null
A Parametric Contextual Online Learning Theory of Brokerage
Accept (poster)
Summary: This paper studies brokerage as a contextual online learning problem. Under the assumption that traders' valuations depend linearly on a context available to a broker, the authors design an algorithm achieving a regret bounded by sqrt T. They also derive a corresponding lower bound. The paper then considers the full information feedback, under which an improvement of algorithm 1 leads to a lnT bound on the regret. Here again, a corresponding lower bound is derived. Finally, the authors stress the importance of the bounded density assumption by showing that without this assumption, it is possible to build an instance for which the regret incurred by any algorithm grows linearly with the horizon. Claims And Evidence: Every claim made in the paper is supported by formal arguments. The authors are transparent regarding the scope of their results. Methods And Evaluation Criteria: I fell that experiments are lacking. Given the simplicity of the setting, implementing Algorithm 1 in a simulated environment should not be too challenging. Specifically, comparing the performance of Algorithm 1 in a simulated environment with that of Gaucher (2024) would further support the claim that this method is superior. Theoretical Claims: I went rapidly through proofs in the main text and I haven’t spotted any blatant problem. Experimental Designs Or Analyses: Not relevant. Supplementary Material: I haven’t thoroughly reviewed the supplementary material. Relation To Broader Scientific Literature: The literature review is excellent. Even though I have no expertise in the specific brokerage problem, I found it easy to understand how this paper relates to existing studies. In particular, the authors compare their results to the state-of-the-art and clearly explain how their approach improves upon previous work. Essential References Not Discussed: I do not think about any essential reference overlooked by the authors. I would only point out that the sentence “For these reasons, the techniques appearing in contextual linear bandits do not directly translate to our problem.“ (line 216–218) seems a bit too strong. While I understand that gain-from-trade is not linear in the context, it is still piecewise linear, and the proofs in the paper rely a lot on the linear contextual bandit machinery. Likewise, the 2-bits feedback does not look very different from what is considered in duelling and threshold bandits. Other Strengths And Weaknesses: strengths: S1. the paper is very clear and well written. The assumptions and the results are clearly stated. I enjoyed reading this work. S2. The results are interesting and improve over the known state-of-the-art for this particular problem. S3. Each upper-bound on regret is supported by a corresponding lower bound, showing that the proposed algorithms achieve almost optimal performances. weaknesses: W1. I believe adding experiments would significantly improve the paper, particularly in supporting the claim that this approach outperforms that of Gaucher et al. (2024). An actual implementation seems even more necessary given that the paper aims to address a practical problem, and the reward environment appears to be relatively straightforward. W2. The paper makes the underlying assumption that traders are truthful when revealing their willingness to trade to the broker, both in the 2-bits and the full information setting. This seems quite unlikely for actual traders to be so passive and non-strategic. Under-reporting or over-reporting valuations in each period so as to bias the estimate \hat{\phi} seems like a simple strategy to increase gains for traders. This game-theoretic aspect is totally left aside. Other Comments Or Suggestions: I do not have other comments. Questions For Authors: Q1. Regarding W2, can you discuss why discarding the strategic aspect of your problem makes sense? Ideally, one would want to prove that being truthful is a dominant strategy, or sustainable as a Nash equilibrium. If proving such a result is impossible, it should at least be discussed in the paper. Q2. As argued in W1, it seems easy to implement an experiment to back the theoretical results of the paper. EDIT: The authors answered both my questions in a convincing way. I changed my recommendation to Accept. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. **Essential References** After reviewing the submission in light of the reviewer remarks, we agree that the mentioned statements could be weakened a bit. We are happy to make the requested changes in the revised version. **Q1** Great question! One could indeed think of the traders in our model as "impatient". They arrive sequentially and permanently exit if unable to (or after completing a) trade immediately. Under such a sequence of "one-shot" interactions, misreporting valuations offers no future strategic advantage, making truthfulness naturally incentive-compatible. Similar "single-shot participation" modeling assumptions are not uncommon in the bilateral trade literature (Myerson and Satterthwaite, 1983; Blumrosen and Mizrahi, 2016; Colini-Baldeschi et al., 2020; Cesa-Bianchi et al., 2021), as they simplify analyses without sacrificing practical relevance (think of large markets where the broker interacts with new traders every day, like future markets, where the median trader participates in at most 4 trades before leaving the market forever; Ferko et al., Retail Traders in Futures Markets, 2024). Still, the reviewer might wonder: "Why not even attempt to systematically analyze recurring traders?" The answer is that the problem wouldn't simply become harder, but unlearnable. The proof of Theorem 5.2 implies, as a corollary, that learning is impossible if valuations are chosen strategically. In fact, it gives the even stronger result that learning is impossible even against *oblivious* adversaries (i.e., when valuations are deterministic sequences of numbers in $[0,1]$ that are fixed ahead of time and don't vary as a function of the learner's actions). That said, our model still captures some degree of strategic behavior, as we discuss in our answer to Reviewer AKyE regarding Assumption 1.2. **Q2** Good question. Please note that the algorithm SBIP in Gaucher et al. requires additional assumptions to guarantee their stated regret upper bound. In particular, the noise distribution for the seller is assumed i.i.d. across times (while we can deal with time-changing noise distributions), and the same is true for the buyer. Furthermore, note that an algorithm for the classic bilateral trade problem needs to be adapted to the brokerage setting since, in the brokerage problem, sellers' and buyers' roles are not fixed. A way to do this is by interpreting the agent with the lower valuation as a seller and the agent with the higher valuation as a buyer. Hence setting $S_t = V_t \land W_t$ and $B_t = V_t \lor W_t$. However, by doing this, the independence between sellers' and buyers' valuations, as well as the assumption that these valuations are zero-mean perturbations of a linear function of the contexts, both present in the statement of the regret guarantees in Gaucher et al., are lost. This suggests that their algorithm might not yield sublinear guarantees in our setting. We validate this intuition by running the experiments the reviewer requested and comparing our regret with theirs. As we suspected, their algorithm applied to our setting suffers linear regret, while our Algorithm 1 grows at the theoretical rate of $O(\sqrt{T})$ we proved; see picture in anonymized link: https://i.ibb.co/MkXr19t4/plot.jpg. The experiments were run for time horizons $T = 1000, 2000, \cdots, 10^4$ for $20$ simulations for each time horizon. At these time steps, the algorithms are tuned by using the parameters prescribed by the respective theories for each specific time horizon, and the corresponding regret is plotted. The dimension $d$ is set to $10$, the noise is uniform in $[-1/4,1/4]$, the unknown vector $\phi$ is picked at random in the $d$-dimensional simplex, while contexts are an i.i.d. process drawn at random in $[1/4,3/4]^d$. We remain available for clarifications in case we missed something or if something needs further explanation. If our response convinces the reviewer, we'd kindly ask them if they would consider adjusting their score accordingly. --- Rebuttal Comment 1.1: Comment: I thank the authors for their high quality answer. Regarding Q1, I am convinced by the "one-shot" interaction setting, which seems practically relevant (Ferko 2024). Regarding Q2, I find that the experiment compelling and sufficient to demonstrate the superiority of the authors' approach. I changed my recommendation to "Accept" accordingly.
Summary: This paper considers the brokerage problem between traders for contextual online bilateral trade where in each round, two traders arrive and a context is revealed, then the broker reveals a price, then broker only observes whether the trade with the given price occurred and the identity of buyer and seller. Under some assumptions on (i) the connection between market value and the context, and (ii) the traders' private valuations, they provided the following results - A linear regret lower bound under no assumption on the density of valuation of traders - When the density of the valuation is bounded by $L$, they have the following results: - For the natural the 2bit information, they provided an algorithm with $\sqrt{LdT\ln{T}}$ regret upper bound and lower bound $\sqrt{Ldt}$ where $d$ is context dimension and $T$ is number of rounds - For the case where the broker additionally can observe the private valuations (full information), they provided an algorithm with $O(Ld \ln{T})$ and a matching lower bound. 1. An important conceptual contribution is a structural result in Lemma 2.1, which shows that under some conditions, the optimal price to post is the market value of the item. 2. At high-level, in the 2bit information setting, they provided an exploration-exploitation separated style algorithm that given a context, it either explores by uniformly selecting a price in $[0,1]$ to update the estimated $\hat{\phi}$ or exploits by selecting a price using the estimated $\hat{\phi}$ and context $c_t$, i.e. $P_t = c_t^T \hat{\phi}$. To prove the regret upper bound, they upper bound the number of exploration rounds and the regret in the exploitation rounds. 3. Note that the matching lower bound (up to logarithmic factor) shows that the exploration-exploitation separated algorithm is optimal. 4. Additionally, the lower bound in Section 5, shows that the assumption for bounded density for valuation is needed to achieve no regret in their setting. At a high-level, when the valuation is a probability mass function, then the optimal pricing is not necessarily the market value, which is shown in Example 5.1, and then using this idea, they showed that for a distinct context sequence, $c_1, \ldots, c_T$, it is possible to construct a sequence of valuations for $V_t, W_t$ such that the learner in all rounds is off by a constant from the benchmark because it does not know whether $V_t,W_t$ is either $V_{t,\theta=0}, W_{t,\theta=0}$ or $V_{t,\theta=1}, W_{t,\theta=1}$ Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: - I checked the proof of Theorem 5.2, except those tedious computations mentioned there. I think there is a minor typo in line 407, where $V_{t, \theta}= c_t^T \phi + \xi_t$ where I think it should be $\xi_{t,\theta}$, similarly for $\zeta_t$. But this typo doesn't impact anything. - I checked the high level of Theorem 3.1, although I didn't closely verify the part of the proof on the right-hand side of page 6. Experimental Designs Or Analyses: NA Supplementary Material: No Relation To Broader Scientific Literature: Since I'm not familiar with the area, I can't fully assess this, however, given the points mentioned in the paper, I think this could advance the knowledge of online brokerage problems. Especially because the contextual setting is relatively less explored. Essential References Not Discussed: I didn't find any Other Strengths And Weaknesses: Writing is very clear Other Comments Or Suggestions: no comments Questions For Authors: no questions Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading the paper and for their kind words! Thanks also for spotting the typos ($\xi_{t} \rightsquigarrow \xi_{t,\theta}$ and $\zeta_{t} \rightsquigarrow \zeta_{t,\theta}$) on Line 407. We will correct them in the revised version. It is our understanding that no further clarification or comment is required from us at the moment, but of course, we remain available to provide them upon request.
Summary: This paper addresses the problem of sequentially determining transaction prices between two parties based on contextual information. Transactions occur, and rewards are obtained, only when the proposed price falls between the private valuations of the two parties. It is assumed that the expected values of these private valuations can be represented by unknown linear functions of the contextual information. The paper considers two feedback settings: a "2-bit feedback" setting, where only the occurrence of a transaction is observable, and a "full feedback" setting, where the actual valuations are observable. We characterize sufficient conditions to achieve favorable regret bounds, demonstrating that good regret bounds can be attained when the valuation distributions have bounded density functions. Conversely, we also show that, without this assumption, the regret can become linear in the worst case. Claims And Evidence: The contributions of this paper are theoretical, and all appear to be supported by correct proofs. Methods And Evaluation Criteria: The evaluation metric used in this paper (the regret defined in Section 1.1) is natural and reasonable. Theoretical Claims: I checked the correctness of Lemma 2.1, Corollary 2.2, Lemma 2.3, Theorem 3.2, and Theorem 4.1. Experimental Designs Or Analyses: N/A Supplementary Material: I reviewed Appendices A and B. Relation To Broader Scientific Literature: Although there are several known studies on the online brokerage problem in the context of online learning theory, research addressing contextual settings appears to be relatively scarce. This paper can be viewed as an extension of existing online brokerage problems to contextual scenarios. Many aspects of the modeling assumptions, algorithms, and analyses resemble elements found in existing literature on online brokerage problems, online learning, and bandit problems. Essentially, the approach can be viewed as a clever combination of existing techniques. Essential References Not Discussed: I am not aware of any particular relevant literature not discussed. Other Strengths And Weaknesses: The assumption that noise distributions have bounded densities might seem somewhat unrealistic in practical scenarios. For example, this assumption is not satisfied if valuations follow certain discrete distributions. I view this point as a potential weakness of the paper. Nevertheless, the authors have demonstrated that without this assumption, linear regret can occur in the worst case, thus providing evidence that this assumption is essential. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review of our work and the comments on our submission. We are pleased to read that the reviewer evaluates positively the soundness of our setting, the correctness of our results, and our discussion of the relevant related literature. It is our understanding that no further clarification or comment is required from us at the moment, but of course, we remain available to provide them upon request.
Summary: The paper introduces the contextual version of the online brokerage problem. The broker observes a (possibly adversarially generated) context and sets a trading price. The buyer and seller whose private valuations are a perturbed linear function of the context agree for the trade if the price is between the lowest and highest private valuation. The objective function is the gain-from-trade which is the difference between the private valuations subject to the fact that the trade indeed occurs. The goal is to obtain sublinear regret under full feedback and two-bit feedback. The authors propose an algorithm that gets matching regret in both settings under smoothness assumption on the densities of the perturbation. In the unbounded case, the problem becomes unlearnable. Claims And Evidence: Yes, all the claims are well supported. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I checked the proofs of all the theoretical claims and they seem fine. Experimental Designs Or Analyses: The paper has no experimental section. Supplementary Material: I reviewed proofs presented in appendices A and B. I summarily checked the lower bound proofs in Appendix C without going into too much detail. The lower bound proofs are mostly a reduction to the lower bound results presented in [BCR24] as mentioned by the authors. [BCR24] Natasa Bolić, Tommaso Cesari, and Roberto Colomboni. 2024. An Online Learning Theory of Brokerage. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS '24). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 216–224. Relation To Broader Scientific Literature: The paper sheds light on the contextual version of the online brokerage problem which is practically relevant (as brokers often have external information before setting the price). The learning algorithms in the paper are quite novel to the best of my knowledge. In the 2-bit feedback model, the algorithm obtains better regret bounds (sqrt(T) vs T^⅔) than the most recent work in this area [GBC+24] under a more relaxed setting which highlights the contributions of the paper. Although constructing the lower bound instance is quite intricate and involved, the paper mostly leverages the existing work of [BCR24]. [GBC+24] Gaucher, S., Bernasconi, M., Castiglioni, M., Celli, A., & Perchet, V. (2024). Feature-based online bilateral trade. arXiv preprint arXiv:2405.18183. Essential References Not Discussed: Connections to the literature on learning how to price in posted price mechanisms and second-price auctions should be discussed more thoroughly. In particular, several works have explored strategic manipulation in similar auction settings: "Dynamic Incentive-Aware Learning: Robust Pricing in Contextual Auctions" by Negin Golrezaei, Adel Javanmard, and Vahab Mirrokni was published in the Advances in Neural Information Processing Systems 32 (NeurIPS 2019) conference proceedings. This work examines robust pricing strategies in contextual second-price auctions where bidders behave strategically. Kareem Amin, Afshin Rostamizadeh, and Umar Syed. (2013). Learning prices for repeated auctions with strategic buyers. In Advances in Neural Information Processing Systems, pp. 1169–1177. This paper explores learning strategies in repeated auction settings where buyers act strategically, influencing the pricing dynamics. There are also earlier papers that studied learning in auction settings but did not account for strategic behavior. It would be helpful to clarify whether the authors borrowed any ideas or insights from this existing literature. While there are clear differences between the settings, the nature of the feedback in both cases is similar, which raises the possibility that techniques from the earlier work could have informed the proposed approach. Addressing these connections would strengthen the positioning of the paper within the broader research landscape. Other Strengths And Weaknesses: **Strengths:** The learning algorithms and its analysis are quite neat and intuitive. The authors obtain tight regret bounds for all the considered settings (up to log terms). **Weaknesses:** The assumptions are possibly too strong. See details in "questions for authors" section. Other Comments Or Suggestions: NA Questions For Authors: The proofs in the paper heavily rely on the two main assumptions. **Assumption 1.2**, in particular, raises some questions about its realism. This assumption states that traders' private valuations are zero-mean perturbations of the market value. However, if the market prices (which serve as the context) are adversarially generated, it seems inconsistent to assume that the traders' valuations would remain unbiased and follow a simple random pattern around the market price. In real-world financial markets, traders' valuations are influenced by complex factors such as market manipulation, asymmetric information, strategic behavior, and behavioral biases — all of which could introduce systematic deviations from the market price rather than just zero-mean noise. Furthermore, adversarial market prices suggest that the market environment itself is highly unpredictable and potentially influenced by strategic forces. Under such conditions, it seems natural to expect that traders' valuations would also reflect some degree of strategic adaptation rather than simple random perturbations. For example, traders might adjust their valuations based on market trends, competitor behavior, or even attempts to exploit the broker's algorithm. Therefore, it would be helpful if the authors could clarify the rationale behind this assumption and discuss whether relaxing it — for instance, allowing for adversarial or biased deviations in valuations — would significantly affect the theoretical results. Additionally, providing empirical or theoretical justification for why the perturbation model remains reasonable under adversarial contexts would strengthen the validity of the assumption and enhance the practical relevance of the model. 2- For a fixed $ \phi \in [0, 1]^d %, if the contexts $c_t \in [0, 1]^d $ are allowed to be adversarially generated, it’s not immediately clear how the inner product $m_t = \langle \phi, c_t \rangle$ would stay within the bounded range of \([0, 1]\). **The range of the inner product $\langle \phi, c_t \rangle $ is theoretically $[0, d]$ because both $ \phi $ and $ c_t $ are vectors of dimension $ d $ with entries in $[0, 1]$. **In the worst-case scenario, if all components of \( \phi \) and \( c_t \) are maximized at 1, the inner product would reach its upper bound of $d $, which exceeds the target range of $[0, 1]$. This raises two key questions: 1. **Bounding the Market Value:** If the goal is to model $m_t$ as a valid market value within $[0, 1]$, how is the inner product restricted to stay in this range when the contexts are adversarially generated? If the adversarial generation of contexts is unconstrained, the inner product could easily exceed 1. 2. **Impact on Learning and Regret Bounds:** If the inner product can exceed 1, the broker’s learning strategy and regret guarantees might break down, since the reward function and the regret definition are based on the assumption that the market value is bounded within $[0, 1]$. If the true range is wider, the theoretical analysis might require adjustments or additional constraints on the adversarial nature of the contexts. One way to address this could be to explicitly impose a normalization or projection step that ensures the computed market value remains in the interval $[0, 1]$ — for example, defining the market value as $ m_t = \min(1, \max(0, \langle \phi, c_t \rangle)). $ 3- Moreover, what are the assumptions on the random variables $\xi$ and $\zeta$ to ensure that the private valuations of the sellers and buyers are in $[0, 1]$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. **Additional references** We thank the reviewer for bringing the two additional references to our attention. We will add them to the revised version. Regarding one-sided problems (like auctions or dynamic pricing), no techniques we are aware of can be directly applied to or give clear insights into two-sided problems (like our brokerage problem) other than high-level online learning ideas (like constructing hard lower-bound instances by hiding slightly better actions in an appropriately constructed set of "hard-to-distinguish" actions). If the reviewer has any specific references in mind, we are happy to add them to the revised related works section. If they want us to add a concise part on related one-sided problems (like auctions or dynamic pricing), we can do that too. With **Assumption 1.2**, we aimed to capture the variability of individual preferences rather than systematic strategic behavior around a target quantity. The assumption is consistent with economic models where the "market price" is the notion that aggregates strategic behaviors, asymmetric information, and biases, so that deviations from the market price reduce to *unbiased* residual noise once strategic behaviors are reflected in the market price itself. In other words, the market price does not represent an asset's inherent value but the market participants' average opinion. Importantly, our theoretical results do not require identical distributions across time. The market's opinion can vary arbitrarily (even adversarially) over time, and the noise distributions too (representing periods where opinions are more or less aligned). The only concept that remains stationary is that the market participants' average opinion determines assets' "market values". We agree that altering this assumption by allowing systematic biases or strategic deviations around a notion of "inherent value" would be an interesting alternative path. We will mention this future research direction in the conclusions section and thank the reviewer for raising this point! **Assumption 1.1 (Market values and contexts)** is indeed an assumption on market values *and* contexts. A natural and common special case in which this assumption holds is when $|| \phi ||_1 = 1$, i.e., when $\phi$ is an unknown vector of weights belonging to the probability simplex, representing how important each component of the context vector $c_t$ is to determine the market value at round $t$ (with the boundary cases being $\phi_i = \mathbb{I}$ {$i=j$}, for some $j$, i.e., all the information necessary to reconstruct the market value is contained in a component $j$, or $\phi_i = 1/d$ for all $i$, i.e., all components are equally as informative). In this case, $\langle \phi, c_t \rangle \in [0,1]$ for all $t$. By Holder's inequality, this can be further generalized by assuming that $||\phi||_p \le 1$ and $||c_t||_q \le 1$ (where $q$ is the Holder conjugate of $p$), which is just another way of expressing boundedness of the vectors. Instead of fixing one of these specific assumptions, we opted for the most general case where only the property $\langle \phi, c_t \rangle \in [0,1]$ is assumed. Note that this assumption is merely for the sake of consistency (it is intuitive to model valuations and market prices all belonging to the same set $[0,1]$), but the same algorithm works, and the same analysis yields the same rate (up to a factor of $d$) if market prices are in $[0,d]$. Thus, our core theoretical insights remain intact up to minor technical adjustments if this assumption is lifted. **Assumption 1.2: $V_t, W_t \in [0,1]$** For $V_t = m_t + \xi_t$ and $W_t = m_t +\zeta_t$ to be bounded in $[0,1]$ for all $t$, the noise random variables $\xi_t$ and $\zeta_t$ need to be bounded in $[- m_t , 1 - m_t]$. A sufficient condition for this to happen is to assume that $m_t \in [a,b]$, with $0<a<b<1$. This is simply stating that the value of the assets traded is never 0 (in real life, if the asset is, e.g., some stock of a company, then $m_t = 0$ only if the company goes bankrupt, in which case, of course, people would not trade the stock) or 1 (which is equally as natural since the fact that prices are normalized in $[0,1]$ means that $1$ represents an upper bound on the largest amount of money that traders would spend to exchange an asset). Note that this boundedness condition does not conflict with the zero-mean assumption but simply ensures that valuations are always interpretable as prices within the normalized range $[0,1]$. If, in light of this discussion, the reviewer agrees that the assumptions we made (Assumptions 1.1 and 1.2) do not significantly restrict the general applicability of our theory, we'd kindly ask them if they would consider adjusting their score accordingly. We remain available for clarifications in case we missed something or if something needs further explanation.
null
null
null
null
null
null
Learning Classifiers That Induce Markets
Accept (poster)
Summary: This paper considers a standard binary strategic classification problem with a twist: the costs of manipulating features are endogenized, i.e., determined by a market. For example, college applicants could improve their SAT scores by paying for an SAT prep course, but the cost of the course is determined by market prices. The theoretical and empirical results focus on linear classifiers, linear costs for improving features and the utility of the decision-makers is their classification accuracy. They provide an algorithm for computing market prices for feature improvements given demand for features, and formulate a differentiable proxy objective for learning a classifier. Then they explore the range of outcomes that can occur using simulations, including a simulation calibrated to the Adult dataset. ## Update after rebuttal: I still view the setting and results as interesting. The clarity and questions of the other reviewers does not make me doubt this. I will keep my scores. Claims And Evidence: Overall, the idea of endogenizing manipulation costs is an interesting one, and I found the questions answered in the paper to be interesting. Several aspects of the model formulation (mostly simplifying assumptions) should have been more clearly justified. For example, the use of linear classifiers: Either (1) Linear classifiers are commonly deployed in the motivating examples the authors consider. (2) Linear classifiers make the problem tractable and the insights generated are interesting. In either case, beyond explaining why the authors made this assumption, I would have liked to see the analysis extended to other settings (even if just via simulations) to see how the insights change under different analysis choices. I have similar questions about the cost for manipulating features. Similarly, there was no analysis or comparison of the surrogate loss function or why it is worth including, beyond the property that it is differentiable and so amenable to gradient-based methods. There was no comparison between solutions obtained by optimizing for the surrogate loss and the global minimum, either theoretically or empirically. One of the main qualitative claims in the paper is that in the market setting, most individuals are able to attain the positive classification. This is surprising, and makes me doubt how well the model is describing real-world strategic classification contexts. In applications of interest, do we see this behavior? If not, my guess is that this result comes from the zero capacity constraints and zero production costs, and don’t fit well with real-world contexts of interest. It would be interesting to consider relaxations of these assumptions, and the authors should further justify why they decided to build in these assumptions and why they are illuminating. Methods And Evaluation Criteria: The theoretical results were limited (see the lack of evaluation of the surrogate loss above), and the empirical examples mostly relied on toy examples. I found the simulations to be fairly comprehensive and illuminating about the different possible outcomes. I really appreciated the construction of a classifier that is completely orthogonal to the feature that would be relevant in a non-strategic setting. Generally, the takeaway that strategic classification may lead to classifiers that behave totally differently than naive ones is interesting and worth further exploration. I also found the observations that if budgets correlate with distance to the decision boundary, it is possible to set classification thresholds so as to get high accuracy to be interesting in the simulations. In future work, I would like to see this proved formally, rather than in simulations. I didn’t understand the connection between Theorem 2 and the fact that almost all points cross, and there wasn't enough discussion for me to follow it. Theoretical Claims: Proof of theorem 1 looks good to me. I didn't check the ones in the appendix. The techniques used mainly involve solving linear programs and analyzing the simple game induced between the decision-maker and decision subjects. I didn't find the theoretical results very compelling, especially since they strongly depend on the linearity assumptions and structure of price setting on the part of sellers. Experimental Designs Or Analyses: The simulations were very interesting, well-motivated and yielded interesting conclusions! Supplementary Material: N/A Relation To Broader Scientific Literature: This paper contributes a novel model of strategic classification (where costs to change features are endogenous). To my knowledge, this is novel. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The writing was clear and the problem is well-motivated. I wish the authors had spent more time on the observations in section 5 since these were more counterintuitive and interesting than sections 3 and 4. Other Comments Or Suggestions: N/A Questions For Authors: Why is the surrogate loss good? Does it have good properties? Did you check performance of optimizing against the surrogate loss against brute force solutions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your encouraging review and insightful questions. > Simplifying assumptions such as linear classifiers and linear costs Our choice to focus on a simple setup stems from several considerations. Indeed, one consideration is tractability (of both pricing and learning problems). Another consideration is simplicity as a guiding principle: as a first step to exploring market-inducing classification, we believe our choices are reasonable. Note also that almost all works on strategic classification focus on linear models and simple costs (e.g., linear, 2-norm). Since our formalism layers on an additional market mechanism, preserving this structure seemed useful. Finally, we hope you agree that despite our simplifying assumptions – the phenomena that arise from the market mechanism are sufficiently interesting, and even surprising, to merit simplifying assumptions. That said – we certainly agree that more elaborate settings are worth pursuing in future work. > No analysis or comparison of the surrogate loss function or why it is worth including This is a great suggestion! **We will gladly add a comparison of the 0-1 loss vs. our m-hinge loss** for settings where computing the 0-1 loss is tractable. In fact, we already have such results for some settings; for example, in Fig. 4 (which shows 0-1 accuracy), the hinge loss can be shown to replace “flat” regions with linear slopes – as can be expected. This suggests that the hinge serves as a useful proxy when slopes push the model towards “lower” flat regions. We can add this to the figure (it was actually removed due to clutter), and include further analysis on other examples in the Appendix. > In the market setting, most individuals are able to attain the positive classification We think we understand your concern, but this claim is not entirely precise. Looking at the results in Fig. 2, it is true that most points end up being able to cross. Fig. 3 suggests that this effect reduces when budgets are more diverse. But both examples consider a single “cluster” of points; once there are more clusters, then it is no longer the case that all or most points will move. The more general phenomena we believe is at play is that *clusters move together*, i.e., the price setter will tend to be an extreme point of *some* cluster. This can be seen for example in Fig. 4 (bottom right): note how changing the budget ratio (y-axis) causes the price setter to jump from being the extreme point of one cluster to that of another. Given your input, we think it will be valuable to add further empirical investigation of this phenomena. We already have some initial results on this and will gladly add them, either in the Appendix or using the extra page of the final version. > How well [does] the model describe real-world strategic classification contexts? This is a good question which we feel summarizes well many of the previous comments. Real world market dynamics are very likely more complex than our model posits, especially if the market forms in response to a classifier. We believe our model, albeit its simplicity, is still able to capture (at least coarsely) certain effects of transitioning from fixed costs to those of an induced market. But of course this is speculation, and a definitive answer requires much further investigation and research efforts. In regards to production capacity and costs, these are certainly a natural next step to consider. One reason we focused on no constraints or costs is that this significantly reduced the number of free parameters required to specify the setup. Another subtler point is that it isn’t immediate how constraints and costs operate when transitioning from a finite sample (as in training) to expected outcomes (on which we aspire to evaluate). Having no constraints and costs makes it possible to have a single well-defined market mechanism that captures both and supports the notion of generalization. > I didn’t understand the connection between Theorem 2 and the fact that almost all points cross Thm. 2 states that, for the considered distributions, (i) there is a unique price setter, and (ii) it lies beyond the peak of uf(u). Intuitively, since this point is typically more extreme than the peak of f(u) itself, the price setter will be positioned at an extreme quantile. We will make this clearer in the next revision.
Summary: This paper extends strategic classification to a setting where users seeking positive predictions can purchase features from sellers, leading to the formation of a competitive market. The authors analyze how users respond to prices, how market prices adjust based on demand, and how classifiers influence these dynamics. The authors propose an efficient algorithm for computing prices and introduce a learning framework that takes into account the market effects of a classifier. The authors also demonstrate how the market-aware strategic learning framework performs empirically on real data with simulated market behavior. Claims And Evidence: Yes. The claims are well motivated, theoretically sound and supported with experiments. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense Theoretical Claims: Yes (Generally went through the statements but didn't check too much in detail) Experimental Designs Or Analyses: Yes. They seem fine but would've preferred to validate on multiple datasets instead of just adult income. Supplementary Material: No Relation To Broader Scientific Literature: This paper extends strategic classification and combines it with topics from the markets and learning literature. I found the direction exciting and quite novel. Further empirical validation on real-world economic datasets would strengthen the claim and relevance. Essential References Not Discussed: References are adequate. Other Strengths And Weaknesses: The paper is generally well-written with novel ideas that are theoretically sound, supported by a algorithms for computing prices and learning market-aware classifiers and decent validation through empirical experiments. A potential weakness is in the simplifying assumptions (such as linearity in cost modeling etc). Another weakness is that the evaluation is quite limited to a single real-world dataset (Adult Income). Other Comments Or Suggestions: Minor typos here and there. Eqn 8, typo in constraint Ln 213: Aglorithm Questions For Authors: Refer above to comment on datasets/ experiments. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Response: Thank you for your positive review! We were happy to hear that you found our paper exciting and novel. If you have any further questions we would be glad to discuss. > Would've preferred to validate on multiple datasets We are happy to report that **we have extended our experimental section to include an additional dataset**. For details please see our response to Rev. 8pCZ. > Simplifying assumptions such as linearity in cost modeling etc. We agree that supporting more complex costs and models would have been a nice addition. However, as a first step, we think that it is reasonable to focus on a linear construction – especially given that most works on strategic classification consider also linear classifiers and simple cost functions (e.g., linear, 2-norm, or squared).
Summary: The paper studies strategic classification in settings where the cost function for modifying inputs depends on the chosen classifier, via the market this classifier induces. In particular, the chosen classifier determines which features are more "important" for positive decisions and therefore affects the demands for each feature, thus also impacting the market equilibrium. The authors propose a natural mathematical framework for this problem, extending the classic strategic classification model. Then, they derive the equilibrium prices for the case of linear classification and propose an algorithm for computing the empirical equilibrium prices. They propose a learning algorithm for their problem and evaluate it on the adult dataset, comparing it to existing methods. The also market adaptation to classifiers in several simple settings, providing further insights into their model. Claims And Evidence: The paper proposes a new framework for strategic classification, motivated by the observation that prices for features may depend on the classifier itself. I find the problem interesting and the proposed model natural and relevant. Overall, I think that the authors derive an algorithm and conduct an evaluation which are reasonable. That said, several aspects of the work can be strengthened, to make the contribution more convincing. In particular, the authors propose a surrogate hinge-like loss for their problem. However, in Figure 6 the proposed method underperforms compared to a standard strategic method. This brings the question of whether another loss surrogate can perform better - perhaps an ablation study for the role of the hinge loss can be helpful here? Similarly, experiments on other datasets, e.g. Folktables or some synthetic example like in Section 5 can help to bring further evidence for the empirical effectiveness of the proposed algorithm. Methods And Evaluation Criteria: See above. At a few places, the notation and assumptions remain a bit unclear to me - please see below at "Other Comments Or Suggestions" for a few specific suggestions and requests for clarifications. Theoretical Claims: The derived theoretical results are interesting and interpretable. In Section 2, the authors claim that the prices are positive. However, in Section 3 we see that the equilibrium prices are proportional to the linear weights. Am I missing something, or can the weights, in general, be both positive and/or negative? How is that compatible with the prices being constrained to be non-negative? Experimental Designs Or Analyses: See above. Supplementary Material: I skimmed through the supplementary material, which provides helpful further details and the proofs of all claims. Relation To Broader Scientific Literature: The conceptual contribution of the paper is clear and extends the framework of strategic classification in a meaningful way. The paper of Chen et al. (2024) is mentioned in the related work, however it's unclear to what extend their model and/or techniques are relevant here. Perhaps the authors can elaborate? Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: - In equation 4, I guess delta also depends on b? - The authors refer to the "demand set" before equation (5). It will be good to give a formal definition of this concept, or at least refer to some relevant source. - At a few places, the authors refer to the linear models parameters as w and a, rather that w and tau - it will be nice to sync that throughout the text. - In equation 6, Delta is defined as the amount of feature change, rather than the new resulting data point. However, in equation 7, it seems to be used as the resulting data point? - Before equation 10, the authors state that they assume that "sellers have foresight". Could you please provide a technical statement for this assumption? Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review and comments. We were glad to hear you see our paper as making a clear conceptual contribution – this was indeed our primary aim and focus. Your review mentions that you believe our results can be strengthened, in particular by considering (i) alternative proxy losses and (ii) more datasets. In regards to (ii), and as per your suggestion, **we have extended our empirical results to include an additional dataset based on Folkstable**. Results here confirm most of our previous findings and provide additional insights. Regarding (i), we address this and all other points below. > The proposed method underperforms compared to a standard strategic method This is true, but only for small budget scales ($\le 8$). Note the original data has scale $2^{10}$ (star marker), for which our approach clearly outperforms the baselines and by a large margin. Note also that the x-axis is in logarithmic scale: our method is better in the range $[16,1024]$. In terms of results, we consider this phenomena an interesting finding. In contrast to our approach (MASC) which anticipates prices that *adapt* to the learned classifier (at equilibrium), the standard approach (strat) assumes fixed prices. Our interpretation of the results is that when the distribution of budgets approaches uniform, price adaptation becomes either mild or inconsequential. In this regime, the “price” we pay to enable differentiability turns out to be larger than the benefits of accounting for price equilibration. > Perhaps another loss surrogate can perform better? This is certainly possible, and we would love for future work to develop better solutions. Note however that designing proxy losses for strategic learning tasks can be quite challenging. Even for standard strategic classification, fundamental concepts such as margins can break completely (see Levanon & Rosenfeld (2022)). The only existing approaches that we are aware of and that apply to our setting are the s-hinge (which we build on) and Hardt et al. (2016), which underlies our baseline. For our market setting, even the connection to the s-hinge is not straightforward, as it is intended for the 2-norm cost – not linear, and the generic extension of the s-hinge to other costs is generally intractable. > Experiments on other datasets We are happy to report that our experimental section has been extended to include another dataset based on Folktables (thanks for this suggestion!). The target variable is employment status, and budgets derive from income. Results show overall similar trends to Adult, but with some distinctions. All methods improve as inequality increases. Our method outperforms all others – here at all scales $\ge 4$. The % of crosses, welfare, and social burden behave similarly to Adult. Interestingly, and in contrast to Adult, here strat *underperforms* across all scales, even when compared to naive. We will add these results to the final version. > Can the weights, in general, be both positive and/or negative? Although possible, our approach does not explicitly constrain weights to be positive. This is in line with the results of Hardt et al. (2016) for non-adaptive linear costs. We do however expect them to be such for the learned classifier (otherwise, users would be paying to decrease feature values). > Vs. Chen et al. (2024): This recent paper is similar to ours in that it generalizes strategic classification to support dependencies across user response through the cost function. The main difference is that in their setup, dependencies are encoded explicitly as externalities in the cost function, which is fixed and predetermined. This means that, as in standard strategic classification setup, learning requires knowledge of the particular cost function. In contrast, our work models dependencies as forming indirectly through the market mechanism; the cost function itself is adaptive, and learning only requires knowledge of how the market operates. Another distinction is that externalities depend on the classifier only indirectly: explicitly, they depend on how other points move. Market costs depend on demand rather than on actions; given prices, actions become independent. These distinctions make it hard to draw direct connections between the results and methods of our work and theirs. > Minors - Eq. (4): $\delta$ is defined as the amount purchased. This implicitly depends on b, which constrains how much *can* be purchased. - Demand set: Eq. (5) is the definition. Demand set in general is a common construct. - a vs. $\tau$: Thanks – we will correct this. - $\Delta$ in Eq. (6): This is true! Thank you for noticing. The fix is to add $x+$ before the argmax. - Sellers have foresight: See e.g. “Noncooperative Collusion under Imperfect Price Information” (Green & Porter, 1984). ---- **Thank you again for your feedback. We hope our response and improvements meet your expectations, and kindly ask you to consider increasing your score.** ---- --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. The folktable results look promising and I believe they should be included in the text, together with their clarifications. My main concern that remains is about the prices being positive/negative. In Section 2, the authors explicitly define the prices as non-negative values. However, in Section 3 onward, nothing in the proposed method seems to constraint the weights from becoming non-negative. Since the equilibrium prices are proven to be proportional to the weights (Proposition 1), this also implies that the prices can be negative. In the rebuttal the authors state that they do expect the weights (and prices) to be non-negative for the learned classifier. Is this actually the case in the experiments? This seems hard to ensure, as not all features in benchmark datasets will be positively correlated with the predicted outcome. Even from a theory standpoint, if all variables are (positively) predictive of a positive outcome, they may be correlated and an optimal classifier may assign negative weights to some perhaps? I will be grateful if the authors can elaborate more on this issue. Additionally, I believe that further details (and citations) on the notions of "demand set" and "seller foresight" should be provided, as I expect many readers of ICML papers to not be familiar with these terms. --- Reply to Comment 1.1.1: Comment: Thank you for your response! We will make sure to include the Folktables results in the paper, and properly define demand sets and seller foresight. Thank you again for both suggestions. As for prices – we will gladly elaborate here further. The answer is somewhat nuanced, so please allow us to clarify (and apologies in advance for the lengthy response). First, regarding experimental results: indeed the majority of our experiments resulted in classifiers with positive weights. Those that did not had only a few negative entries, and with small absolute values. In addition, we reran all experiments with an additional constraint enforcing $w \ge 0$ (implemented using projected gradient descent). Results for our method are virtually unchanged (up to noise from randomization). Second, and nonetheless, we agree that our method as currently presented does not explicitly enforce positive weights. One (easy) solution would be to add this constraint to the setup. This is certainly possible – and we would happily consider this if you believe it would make things clearer. But at the same time, we would like to stress that our method and results are sound even without this assumption. The reason is that even if some weights are negative, there still exists a price vector $p’ \ge 0$ such that (i) $p’$ is an equilibrium price, and (ii) outcomes are the same under $p=w$ and $p’$. This is because items are exchangeable: the price a single point x pays for crossing is the same regardless of which features are eventually bought. Hence, there exist many equilibrium prices (which is common in markets with exchangeable items). Our method makes use of the particular choice of $p=w$ since this enables us to adapt the s-hinge to our purposes (*). But as long as not all weights are negative, the market remains well-defined, because even if the particular equilibrium $p=w$ is not feasible, other $p’ \ge 0$ are. Similarly, our method remains effective because the set of points that move is the same under any equilibrium price (whether negative or positive), and so outcomes (and hence the loss and accuracy) are also the same. The proof for this is simple: it shows that regardless of which features are “bought”, demand remains similarly proportional, and the price setter is invariant to this choice. We will add the formal claim and a full proof to the Appendix. Finally, we note that it is generally possible to work with prices $p=w$ in which $w$ has negative entries if we interpret these as meaning that users need to “pay to get less” of something. For example, if a feature encodes weight, then we can pay the gym to reduce it; if a feature encodes the size of a house, then change in any direction is costly. This extends beyond the setting we present in the paper, but is still a feasible interpretation. Technically it requires constraining that $x^h$ remains positive, but this would not change our current results. We thought it would be clearer to focus on positive prices (despite the loss of generality). But if you feel otherwise, then we can certainly consider this alternative. (\*) The s-hinge itself is designed for L2 costs, and is inappropriate for fixed linear costs. Our construction works because, for our choice of $p^*$, points move “as if” towards the decision boundary – as they do under L2 costs. This made it possible to adapt the s-hinge to become our m-hinge, which works for linear *market* costs since they adapt to the classifier. It would have been equally possible to work with non-negative equilibrium prices, but this requires an additional normalizing constant, which we preferred to sidestep.
Summary: The authors propose a market-based perspective in strategic classification and challenge the key assumption that cost functions do not depend on the classifier and are fixed. The paper builds on the premise that classifiers, when used in the real world, incur demand for their features, especially when they lead to a desired prediction. To this end, the authors present a proof of concept using linear classifiers and conduct an empirical study on the `adult` dataset to demonstrate how classifiers impact markets and vice versa. Claims And Evidence: > In standard strategic classification, a useful strategy that exploits this idea is to 'raise the bar'... (line 363) Is there citation for this? Methods And Evaluation Criteria: No issues here. Theoretical Claims: I didn't find major issues with the correctness of the paper's theoretical results. However, there were some incorrect claims that might have been a result of typos. See Questions. Experimental Designs Or Analyses: The experimental design itself doesn't have major flaws. Perhaps one could complain about the lack of other datasets, but as long as there are meaningful results and analysis, the number of datasets is not a big issue. However, this paper's empirical results analysis is woefully insufficient. The results section merely describes what we see in Figure 6 and does not derive any meaning from it. I think there is a missed opportunity with the *burden* metric; when I read section 5.3 and Figure 3, I immediately thought of the cost of such classifiers for those with $y = 1$ -- there is quite a bit to discuss here. Supplementary Material: I've skimmed over the supplementary material which includes additional plots from Section 3-5, theoretical results and an algorithm for differentiable market prices. Relation To Broader Scientific Literature: See other strengths and weaknesses. Essential References Not Discussed: I couldn't think of essential references that the authors did not mention. Other Strengths And Weaknesses: Three major weaknesses: 1. **Exposition** Although the technical details are not widely complex (which is fine), the paper is hard to follow. For example, in the introduction, there is an attempt to summarize the contributions of the work towards the end, but the takeaways are not clear. > we show that markets can give rise to complex behavioral patterns that differ significantly from the conventional model of strategic classification (lines 85-88). What are the behavioral patterns in conventional strategic classification? I think this is implied in the sentences that follow, but stating it explicitly and contrasting the findings would serve the paper better. 2. **Figures/Plots** Another major issue (related to 1.) is how the plots are explained (or not). It takes a very long time to parse the figures (especially figures 2, 3, and 4). The captions are not descriptive. For example, I believe the revenue curves are drawn from a large sample size (since they are smooth), but I can't find any mention of that in the caption or the text. This was especially confusing since figure 1 shows how revenue curves are piecewise linear functions. Considering how figures are supposed to aid understanding, this is a major concern. 3. **Contribution** Lastly, I am not convinced that this is a valuable contribution to strategic classification. This is perhaps an artifact of the lack of analysis in the results section, but I am unsure of the value of considering a classifier-dependent cost function through the paper's mechanism. If the performance is dependent on budgets, isn't the classifier really classifying based on the budget? Furthermore, consider the last couple of sentences in the introduction: > classifiers will be accurate under the markets if they induce only if they associate positive predictions with high budgets. This raises natural questions regarding fairness and socioeconomic equity. This yields two questions: - How does this algorithm perform when budgets are not correlated with the outcome? - Do we want to build such classifiers? Or are the authors suggesting that this is an accurate description of reality? Other Comments Or Suggestions: - line 296: should be right-skew? - line 303: $b_\textrm{max} = 2^{-\alpha}$: isn't this smaller than $b_\textrm{min} = 1$ for $\alpha > 0$ ? - Figure 5: would appreciate if there could be a better way to indicate $\Delta_{h}(x)$? Its very hard to differentiate the points. - If the authors want to use the term "negative" outcome, they may want to switch to $y\in \\{-1,1\\}$. There are some typos regarding classes (e.g., eq (8), line 193) Questions For Authors: - Why the term "inequity" (vs. let's say inequality) in figure 3? > setting $w=\boldsymbol{p}$ (line 411) - Can the authors reason in detail how this draws on the idea of the main algorithm in [1]? [1] Hardt, Moritz, et al. "Strategic classification." 2016. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your careful reading and important comments. Overall, it seems that your concerns are: (i) clarity of exposition and captions, (ii) discussion of the empirical findings in Sec. 6, (iii) the use of a single dataset, and (iv) contribution. For (i) and (ii), **we believe these are easily fixable**, especially given the extra page for the final version. For (iii), **we have added another dataset**, please see our response to Rev. 8pCZ. For (iv), **we believe our response below will help establish the contrary** by clarifying the role of budgets. In this case, we kindly hope you would be willing to reconsider your evaluation. > Empirical results analysis is insufficient Thank you for pointing this out. In hindsight we agree that a more thorough discussion of the results in Sec. 6 would be beneficial – we will gladly expand the discussion here. As you note, there are many potential insights to discuss within the existing results. Please note however that our empirical analysis spans both Sec. 5 (synthetic data) *and* Sec. 6 (real data). We hope you agree that our discussion in Sec. 5 is adequately comprehensive (as other reviewers note), and that together they provide a useful picture. > Exposition As per your suggestion, **we will add to the introduction a succinct summary of the paper’s contributions as bullet points**. We will also make the distinction from standard strategic classification clear and explicit. On a more general note: Our paper lies at the intersection of two disciplines. Presenting our setup and motivation in a way that is equally readable to both audiences is not an easy task. We have given this much thought, but it is possible that the current version leans more towards one than the other (given that other reviewers were happy with clarity). If there are any particular clarity issues you feel are present, please let us know and we will improve them. > Figures/plots Thank you for this input. We will gladly make use of the extra page to clarify and further elaborate. > If the performance is dependent on budgets, isn't the classifier really classifying based on the budget? **Generally, no.** Budgets are certainly important, and can play a significant role in determining learning outcomes, but are not the only component that matters. If this were the case, then learning itself would become degenerate. Our results indicate that this is not what happens. The relation between budgets and labels affects accuracy through the market mechanism – **but the market itself depends on the classifier**. Since the classifier h(x) does not take budgets as input, the relations between features x and labels y is what determines the space of *possible* markets. Learning *can* come to rely on features for which budget disparities are dominant (if exist), but it is equally plausible that learning will come to *avoid* such markets. We have included in the appendix a simple 2D example which illustrates this behavior. In the construction, b correlates with y in the direction of x1, but the optimal classifier uses only x2, under which the market is stagnant. > Last sentences in intro: Please note that these should be read in the context of the preceding sentence – not as a general independent claim. > How does this algorithm perform when budgets are not correlated with the outcome? This is a good question. In general, standard (linear) classifiers work well when the labels correlate with distances from the decision boundary. In strategic market settings, budgets can improve (or degrade) this correlation–which is why they can be helpful. Thus, what matters is the relation between budgets and labels **along the direction of the classifier** (or more precisely, the distances on its negative side). This depends on the classifier and is not an a-priori property of the data. This is also why learning should make use of features (see our point above). If budgets and labels are completely independent, then the choice of classifier should have little effect on the market. But the more likely scenario is that in some directions features will be more informative, and in others, budgets. An optimal solution is likely one that exploits the combination of these two different sources of information through how they form a market. > Do we want to build such classifiers? Or [is this] an accurate description of reality? The question of what we should “want” is of course multifaceted and depends on context. What we believe our results convey is that: (a) if we learn naively (in a way that does not account for the market), then market forces will obscure our performance, whereas (b) if we learn in a way that accounts for the market, then this can improve performance, but possibly by exploiting financial inequalities. > Minors: line 363: See Hardt et al. (2016). line 296: Yes, we will correct this. line 303: bmin and bmax have been swapped. Fig 5: We will add arrows for a subset of points. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. **Concerns raised by other reviewers** I know some reviewers have comments on the somewhat simple setting (i.e., restricting to a linear model, assuming $\delta_ \geq 0$). I understand that the authors did this to make theoretical analysis more tractable; I don't believe it is a major issue that impacts the contribution of the paper -- especially considering the authors are presenting a novel concept **Budgets** Having said that, my question/concern regarding budgets remain. I realized that my comment on whether the classifier is discriminating on the budget was not very clear. From what I understand: - Learning procedure considers $x^h = \Delta_{h}^{\mathrm{market}}(x;b)$ (equation 13) - Hence for a fixed classifier, my intuition is that the "market induced" $x^h$ depends on the budget $b$ - As a result, the learned classifier $h$ has internalized the correlation between budgets and labels (if it is useful for prediction, that is) The authors seem to acknowledge in both the paper and the rebuttal that performance (with strategic agents) depends on how budgets are distributed across labels. For example, if individuals with positive labels have large budgets, this allows the learned classifier to outperform standard linear models. This is because the budgets allow the data post-strategic response to be more separable. With this in mind, I am confused as to why the authors say: > what matters is the relation between budgets and labels **along the direction of the classifier** > Learning _can_ come to rely on features for which budget disparities are dominant I was under the assumption that budgets were universal (i.e., $b \in \mathbb{R}_+$ shared among all features). Still, it seems like the utility of this concept (which I admit is interesting) relies on the distribution of budgets. **Minor Additional Questions** - Is $h_{\mathrm{market}}$ an output of the algorithm? --- # Updated Rebuttal Response I thank the authors for their response and for clarifying the relationship between budgets, labels, and $x^h$. > Since h is a function of x alone, a classifier cannot generally “internalize” the relation between b and y because this has to go through x. Perhaps "internalize" was not the best word choice, but >"only directions which induce markets where _moving_ correlates with labels can be helpful for accuracy" seems to suggest that an accurate model will use (which, I believe is the main point of the method) the relationship between the budget and label; this is because budgets determine (jointly) which points move. Which begs the question: **How does standard strategic classification perform in the example setting above? (i.e., when $b$ and $y$ are not correlated)** -- my intuition tells me that it might perform comparably. Furthermore, I want to ask the authors on how one should implement budgets in real-life applications; would it be disposable income (or bounded by it)? I agree that how cost functions are defined in current strategic classification literature can be unrealistic. While I commend the work for challenging the assumption that cost functions are predetermined and bringing it into discussion, I want to highlight that costs are often not uniform. Given an action (i.e., to purchase a feature) may have different costs for different individuals. I believe this is one of the reasons why modeling cost functions is very difficult. Nonetheless, I appreciate the authors' response and have raised my score. --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up! We appreciate your willingness to further discuss these points with us. We hope our following response will help clarify the remaining issues. > Budgets Indeed, induced markets depend on the distribution of budgets (to us, this is what makes them interesting!). But the *direction* induced by w is crucial to determine *how* (and if) budgets affect outcomes. We are not entirely sure what remains unclear, but please allow us to try and shed light on some points that may be helpful: 1. While best responses $x^h$ do depend on budgets, please note that each $x_i^h$ depends on *all* budgets $b_1,\dots,b_m$, not only on its own $b_i$. This is implicit in the dependence on $p^h$ which is a function of the set $\\{(u_i,b_i)\\}$. It is therefore not true that if b correlates with y, points with larger b will simply “move further” and therefore improve accuracy. This is since the market coordinates movements across users, i.e., the $x_i^h$ are all dependent. 2. One way to see the role of directionality is through the observation that prices do not depend on budgets directly, but rather, on how they “morph” demand u – i.e., the directional distances to h; see the definition of units-per-budget $\bar{u}$ in line 215 (right) and their usage in step 5 in Algo. 1. This morphing can either be helpful (if it disentangles labels) or obstructive (if it mixes them). Both can happen even if b correlates with y. 3. Since h is a function of x alone, a classifier cannot generally “internalize” the relation between b and y because this has to go through x. The relation between x and b affects which points *move* (jointly, through the market); the relation between x and y affects accuracy. Hence, by conditioning on x, only directions which induce markets where *moving* correlates with labels can be helpful for accuracy. **Example.** To illustrate further, consider a simple 2D example (similar to that of Fig. 5) where x1,x2 are independent, and y is a function of x1. Now fix b to be an increasing function of x2. Note this means that **b and y are *not* correlated**, since (i) y depends only on x1, (ii) b depends only on x2, and (iii) x1 and x2 are independent. * Consider a classifier h2 that uses only x2. Since x2 correlates with b, the market will cause some points to cross (pending on the choice of threshold). But since x2 is uninformative of y, such movements are entirely unhelpful for accuracy; that is, points with y=0 and points with y=1 generally “move” together, and so **the market is unable to separate points by label**. The optimal h2 (threshold at ~1) attains accuracy of 0.63. * Now consider a classifier h1 that uses only x1. Since b is unrelated to x1, all points have the same p(b)=p(b|x1). Intuitively, this means that **the market is unable to separate points by their budgets**, as each interval of x1 has the same mix of budgets (which are spread uniformly along x1). Luckily, x1 *is* informative of y. The optimal h1 thresholds at ~0.5 (i.e., between the Gaussians), and only the portion of the negative points with higher budgets cross. Accuray here is 0.83 – and is optimal. Note that these results are the opposite of the original example in Fig. 5. This is precisely because in the above example b correlates with features, not labels. The example in Fig. 5 shows how when b and y start out correlated, the optimal classifier can “decorrelate” them. Our example here shows the opposite: the optimal classifier is completely unable to (nor needs to) discern between low-budget and high-budget users. **High-level rationale.** The standard strategic classification setting considers costs that are fixed and uniform across users. A primary goal of our work was to extend beyond this. The first step was to introduce a market mechanism: this makes costs adapt to the choice of classifier. But our results show that markets under uniform costs are typically “extreme” in that either none or (almost) all users move. This motivated allowing for individual budgets to vary, which we think is a plausible modeling choice. It is true that users with bigger budgets can generally “move further”; this is innate in the construction. But the fact that strategic learning tends to produce classifiers that discriminate on the basis of budgets, as our results suggest, is an emergent phenomena – not an immediate implication of the setup. Markets coordinate the behavior of many users in a non-trivial manner and introduce complex dependencies in who moves and who doesn’t. In our minds, the fact that learning can exacerbate budget inequalities *through* the market mechanism is an interesting finding. > Is $h_{market}$ an output of the algorithm? Yes, of the learning algorithm that accounts for strategic market behavior.
null
null
null
null
null
null
A Unified Comparative Study with Generalized Conformity Scores for Multi-Output Conformal Regression
Accept (poster)
Summary: The paper addresses the problem of applying conformal prediction to multiple continuous output prediction systems. The challenge is obtaining a small average region with a reasonable computation complexity. The paper presents a unified view of the current methods and proposes two novel approaches. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: Yes Supplementary Material: yes Relation To Broader Scientific Literature: The main contribution is a unified treatment of previously proposed methods. Essential References Not Discussed: There is a closely related research direction of multiple testing in parallel in conformal prediction and family-wise error rate control. Popular methods are Bonferroni and Bonferroni-Holm corrections of the rectangular shaped region. Other Strengths And Weaknesses: The topic is important, and this paper is well-written and provides an up-to-date comparative review of current methods. It can be a good review paper for people interested in the topic. After comparing many methods it is not clear what the conclusion is. The conclusion section is short and non-informative. The presentation could be improved. The notation is sometimes too dense, making it challenging to follow. The introduction could provide a clearer motivation for the need for multi-output conformal methods. Other Comments Or Suggestions: no Questions For Authors: How is it related to family-wise error rate control methods? Popular methods are Bonferroni and Bonferroni-Holm corrections of the rectangular. I was surprised that a review paper didn't even mention this line of research. While the tabular datasets are informative, additional experiments on real-world high-dimensional or unstructured data (e.g., images, text) would better demonstrate the practical impact of the proposed methods. For the methods DR-CP, C-HDR, and others, It is not clear how you know the density f(x|y) from the data. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback and suggestions for improvement. **Missing relation to family-wise error rate (FWER) control methods:** Thank you for raising this point. We do briefly mention multiplicity control approaches in Appendix A (citing Timans et al., 2024 [59]). FWER methods (like Bonferroni or Holm corrections on univariate CPs) are indeed a related direction for constructing multivariate regions, typically by taking Cartesian products of univariate regions $\hat{R}\_i$. Their strength lies in leveraging well-understood univariate CP. However, our work focuses on methods that explicitly model the joint distribution $F\_{Y|X}$ (using e.g., multivariate NFs, DRFs, GMMs, or generative sampling) to construct potentially non-rectangular regions that can better capture output dependencies and often result in smaller region sizes (as seen empirically). While FWER methods are valuable, especially when only marginal predictions are available, the methods we survey and propose aim for potentially tighter regions by directly modeling the multivariate structure. We have revised Appendix A to better clarify this distinction. **Conclusion unclear/short:** We agree that the conclusion was too brief and lacked practical guidance. We have expanded it to provide clear, actionable takeaways based on our unified study and experiments, summarizing the trade-offs: * **If only univariate margin models are available/desired:** Use M-CP or CopulaCPTS (fast, but hyperrectangular; may poorly capture dependencies). * **To minimize *average* region size (potentially sacrificing conditional coverage):** Use DR-CP (if density $\hat{f}$ available), else STDQR (if invertible latent model Q available), else PCP (if only samples $\hat{Y}$ available). * **To achieve good *conditional coverage* and small *median* size:** Use C-HDR (if density $\hat{f}$ available), else L-CP (if invertible Q available), else C-PCP (if only samples $\hat{Y}$ available). * **For good conditional coverage with lowest *computational cost* among ACC methods:** Use L-CP (if invertible Q available). **Questions** 1. *Relation to FWER:* Please see our response above regarding the distinction (modeling joint vs. combining univariate CPs, region shapes, and dependency handling). To better understand the prediction regions produced by FWER control methods, we also provide a qualitative and quantitative comparison with Bonferroni correction on our dataset from Figure 1 in [this linked PDF](https://pdfhost.io/v/gqCAunzaRU_rebuttal-jr27). The method is fast computationally but produces larger regions due to the rectangular shape, similarly to M-CP. 2. *Need high-dimensional/unstructured data experiments:* We agree on the importance of high-dimensional settings. To this end, we included an experiment on CIFAR-10 image generation (Appendix I), where the output is a 3x32x32 image ($d$=3072). We used a conditional Glow model [34, 54]. The results (Table 3) largely confirm our findings from tabular data: C-PCP, L-CP, and C-HDR demonstrate superior conditional coverage (WSC, CEC) compared to other methods even in this high-dimensional setting. We agree this experiment deserves more visibility and will ensure it is also mentioned in the introduction. 3. *How is the density $f(y|x)$ known?* Crucially, we do not assume the true density $f_{Y|X}$ is known. All density-based methods (DR-CP, C-HDR, HD-PCP) rely on an estimate $\hat{f}(y|x)$ learned from data. In our experiments, $\hat{f}$ is provided by models like normalizing flows (MQF², Glow), DRF+KDE, or GMMs (detailed in Appendix F.2). Conformal prediction takes this potentially imperfect estimate $\hat{f}$ and calibrates the resulting prediction regions (using the calibration set) to provide valid finite-sample marginal coverage guarantees, regardless of how accurate $\hat{f}$ is. Methods like C-HDR, L-CP, and C-PCP additionally provide asymptotic conditional coverage guarantees as long as the estimator $\hat{f}$ (or sampler $\hat{Y}$, or latent map $\hat{Q}$) converges to the true data generating process.
Summary: This paper provides an overview of multi-output conformal regression methods, putting them in a unified setting, and propose two new approaches based on scores CDF, that generalizes some previous methods, as well as a latent-based approach, that generalizes other families of approaches. ## update after rebuttal I thank the authors for their replies. Overall the paper was quite enjoyable to me, and while as raised by other reviewers there are certainly ways to get further results, I do believe it makes a nice contribution with respect to the current literature on mutli-output conformal regression. Claims And Evidence: Yes, the claims made about the new method, notably in terms of conditional coverage, are supported both by theoretical proofs (appendices E.2., that seem correct as far as I could tell with the time I could spend on it), as well as by extensive experiments (appendix F, notably). Methods And Evaluation Criteria: The methods and evaluations criteria are classical metrics and arguments from conformal prediction papers dealing with multi-output regression, as well as some new unbiased metrics regarding the region size (discussed in appendix F). Overall the used methods appear to be sound. Theoretical Claims: I checked the claims as best as I could, and they appear correct to me. However, given the average size of ICML submissions (say, 30 pages with supplementaries included) and the number of reviews required by the confrence, my checks were not as thorough as they would have been for, say, a journal paper. Experimental Designs Or Analyses: I check the experimental designs, and all appeared sound to me. Graphs and results appear to be in-line with expectations. Supplementary Material: I scanned all the supplementaries, both regarding theoretical arguments and experiments. I could not detect major flaws within them. Relation To Broader Scientific Literature: Yes, in the sense that providing reliable multi-output regression regions is a generally interesting problem. Essential References Not Discussed: Most references I could think of were present in the paper, that appears to be very well educated in the field. Other Strengths And Weaknesses: Overall, this is a quite enjoyable paper that wraps up many method of the literature and provides in addition some interesting new ones. I see no major weaknesses in the paper. My only regret is that this probably should have been a journal paper rather than a 6 pages paper with 30 pages of supplementaries, but I guess this is how things are done nowadays. Other Comments Or Suggestions: No specific comment, except that authors should double check references (some do not compile as expected, e.g., [37] and [61]) Questions For Authors: No specific questions to the authors, this is a nice work overall. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thorough, positive review and your support for acceptance. We appreciate you finding the paper enjoyable and the methods sound. We will correct the reference issues you kindly pointed out in the final version.
Summary: This paper performs a unified comparative study of existing conformal methods with different multivariate base models for constructing multivariate prediction regions. It generalizes two classes of conformity scores from the univariate to the multivariate case. Moreover, it conducts large-scale experiments comparing the different multi-output conformal methods. Claims And Evidence: Yes, the multivariate conformity scores introduced ensure asymptotic conditional coverage while maintaining exact finite-sample marginal coverage. The proofs are provided. Methods And Evaluation Criteria: Yes, the comparative study and the large-scale experiments are evaluated on a wide range of conformal methods and datasets. Theoretical Claims: I did not go through the proofs for the theoretical claims. Experimental Designs Or Analyses: I read the descriptions of the experiments in the paper and did not see major issues. Supplementary Material: I went through the supplementary material but not in detail. Relation To Broader Scientific Literature: This paper generalizes two commonly used classes of conformity scores to the multivariate case. This is an important contribution as the classes of multivariate conformity scores are underexplored in the literature. Moreover, this paper provides an extensive comparison of existing conformal methods. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: Strengths: - The paper is well-written - This work studies multivariate conformal prediction, which is relatively underexplored. This can provide a valuable direction for future research. - Two multivariate conformity scores, which generalizes existing univariate conformity scores, are introduced. Theoretical analyses are provided. - This work gives a detailed and unified comparative study for existing multivariate conformal models. - The experiments cover a wide range of methods - The methods are implemented using a unified code base, ensuring fairness Weaknesses: - The proposed scores seem not to have finite-sample conditional guarantees Other Comments Or Suggestions: This work performs an extensive study on multivariate conformal methods and introduces two multivariate conformity scores. This paves the way for a research direction which is relatively underexplored. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback and recognizing the value of our work. **Proposed scores seem not to have finite-sample conditional guarantees:** You are correct. Our focus is on developing flexible conformal methods applicable to complex, modern generative models (density, sample, or latent-based) with minimal assumptions on the underlying data distribution or the base predictor's accuracy. Under such weak, distribution-free assumptions, achieving exact finite-sample conditional coverage is generally impossible, as proven by Barber et al. (2019) [5]. Instead, we provide asymptotic conditional coverage (ACC) guarantees (Appendix E.2) for C-PCP, L-CP, and C-HDR. This means that as the base model converges to the true distribution and the calibration set size increases, these methods provably achieve the desired conditional coverage level. Our experiments (Fig 3) further empirically validate that these methods also achieve superior empirical conditional coverage in finite samples compared to methods lacking ACC guarantees.
Summary: The paper considers conformal prediction for high-dimensional regression. While one can extend uni-dimensional regression algorithms to multi-dimensional ones, other algorithms that explicitly work in $\geq 1$ dimensions also exist. This paper introduces two conformity scores, C-PCP and L-CP, exploring their connection to existing methods. Further, the paper experimentally compares different algorithms on various synthetic and real-world datasets. Claims And Evidence: This paper is a comparative study. Methods And Evaluation Criteria: The paper uses a variety of synthetic and real-world datasets. Furthermore, the paper compares multiple algorithms for its study. The paper mentions that CP$^{2}$-PCP is similar to C-PCP (lines 195-197, column 1). What is the reason for not including it? Theoretical Claims: I checked the correctness of Proposition 1. Experimental Designs Or Analyses: The paper evaluates the algorithms on marginal coverage, conditional coverage, set sizes, and computational time. A few questions/suggestions: 1. Replacing Fig. 5 with a table containing mean and median set sizes would help. The actual values convey more information than relative rankings. 2. Is there a quantitative analysis for Section 5.1? 3. Figs. 7 and 8 are difficult to compare. 4. Tables 3, 4, and 5 should bold the statistically significant values (accounting for the standard deviations). Supplementary Material: I briefly reviewed the Appendix. Relation To Broader Scientific Literature: The paper compares different conformal prediction methods for high-dimensional regression. It also introduces 2 new conformity scores. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The paper considers conformal prediction for high-dimensional regression, which is of practical importance. 2. The paper compares different conformity scores on various datasets. This comparison includes empirical results and some theoretical links. Weaknesses: 1. The novelty seems limited to the proposed conformity scores C-PCP and L-CP. 2. It is unclear if and when the proposed conformity scores perform better than the existing ones. C-HDR seems to perform the best. Other Comments Or Suggestions: 1. The existing methods that C-PCP and L-CP build on should be discussed in detail in the main paper. For example, details of directional quantile regression [Feldman et al., 2023]. 2. The non-conformity score need not be deterministic (lines 83-84, column 2). As long as the training data is independent of the calibration and test data, marginal coverage is guaranteed. 3. The conformity score by Sadinle et al. [2016] is for classification. Even though one can use it for regression, one should explicitly mention that. 4. The citations need to be corrected. For example, Sadinle et al. [2016] was in the Journal of the American Statistical Association in 2019. Typo: 1. "...$D_{cal} \rightarrow \infty$..." $\rightarrow$ "...$|D_{cal}| \rightarrow \infty$..." (line 162, column 2) Questions For Authors: 1. What are the challenges with extending conformal prediction to high-dimensional regression (Section 4)? 2. What causes the absence of contours in Fig. 2 (lines 260-264, column 1)? The method should still construct a set. 3. I am confused about the claim of the introduction of new classes of conformity scores (lines 124-126, column 2). Don't these classes already exist? The paper also provides existing methods for both classes. 4. Is there a practical takeaway from Proposition 1? 5. Do C-PCP and L-CP use Euclidean distance or a different distance? 6. Does the analysis change if the setup switches to exchangeable data (Section 2)? 7. What do the acronyms C-PCP and L-CP stand for? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. **The paper mentions that CP²-PCP is similar to C-PCP. What is the reason for not including it?** CP²-PCP was proposed concurrently with our submission period and published very recently (ICLR 2025). Hence, we were unable to include it in our empirical comparisons. However, recognizing its relevance, we have added an in-depth discussion in Appendix H comparing CP²-PCP with our CDF-based scores (including C-PCP) and highlighting the similarities and differences in their approaches to achieving conditional validity. **The novelty seems limited to the proposed conformity scores C-PCP and L-CP** In addition to the proposed conformity scores C-PCP and L-CP, this work is the first (to our knowledge) to categorize systematically, implement (in a unified codebase for fairness), and empirically compare a broad range of multi-output conformal methods (marginal, density, sample, latent-based) within a single framework. This synthesis reveals crucial trade-offs (Table 1), motivates our proposed methods (C-PCP/L-CP each address specific gaps), and establishes theoretical connections, which were previously unexplored. **Unclear if/when proposed scores perform better; C-HDR seems best** C-HDR indeed performs very well, especially for region size when density estimation is accurate and feasible. However, C-PCP and L-CP offer distinct practical advantages in specific scenarios: * **C-PCP:** Requires **no explicit density estimation or likelihood evaluation**, only samples from a generative model. This makes it applicable to models like diffusion models or GANs where densities might be intractable or unavailable, situations where C-HDR cannot be used. * **L-CP:** Achieves asymptotic conditional coverage (ACC) with **significantly lower computational cost** (Figure 6, often >100x faster) compared to C-HDR and C-PCP, as it avoids per-instance sampling or complex density calculations. Both C-PCP and L-CP achieve asymptotic conditional coverage (ACC), unlike DR-CP/PCP/HD-PCP/STDQR/M-CP/CopulaCPTS. Their region sizes are competitive, generally only surpassed by C-HDR (if applicable) and DR-CP (which lacks ACC guarantees). Table 1 and the Conclusion summarize these trade-offs. **Suggestions** Thank you, these are excellent suggestions for improving clarity. We have implemented them as described in [this linked PDF](https://pdfhost.io/v/VUnLttqXdJ_rebuttal) and will incorporate other comments in the final version. **Questions:** 1. *Challenges in high-dim CP:* Primarily, the lack of a natural ordering (unlike 1D) makes simple extensions of univariate methods (like CQR directly) difficult. This motivates methods based on joint density (DR-CP, C-HDR), samples (PCP, C-PCP), or latent spaces (L-CP, STDQR) to capture complex dependencies and define multivariate regions. 2. *DR-CP empty contours (Fig 2):* This occurs when, for a given $x$ and a desired coverage level (e.g., $x=1$ and $1-\alpha$ = 0.2), no region in the output space achieves a density value $\hat{f}(y|x)$ high enough to be less than or equal to the constant threshold $-\hat{q}$. This visually demonstrates DR-CP's lack of conditional coverage. Methods with asymptotic conditional coverage (C-HDR, C-PCP, L-CP) adapt their thresholds or regions based on x and avoid this issue asymptotically. 3. *Claim of new classes:* We introduce generalizations of existing univariate concepts to the multivariate setting, forming broader classes: - CDF-based: Generalizes the univariate HPD-split/C-HDR score [31] to work with any base multivariate conformity score $s_W$ (Eq. 11), leading specifically to C-PCP when $s_W=s_\text{PCP}$. - Latent-based: Generalizes univariate Distributional CP [13] to multivariate outputs using any invertible conditional generative model Q and any latent distance function $d_\mathcal{Z}$ (Eq. 14), leading to L-CP. Feldman et al. [20] also uses a latent space but performs CP on grid samples mapped to the *output* space, not directly in the latent space like L-CP. 4. *Practical takeaway from Prop 1:* An interesting practical takeaway follows from the fact that DR-CP and C-HDR are linked in the same way as PCP and C-PCP. Since DR-CP can be shown to asymptotically have the smallest average region size while C-HDR empirically has a smaller median region size, similar observations are expected for PCP and C-PCP. This is verified empirically: PCP has a smaller average region size across all base predictors, while C-PCP has a smaller median region size. We have added this insight to Section 5.3. 5. *Distance for C-PCP/L-CP:* In our experiments, PCP (and thus C-PCP) uses Euclidean distance, and L-CP uses the Euclidean norm in the latent space. Both could be generalized to other metrics. 6. *i.i.d. vs exchangeable data:* Thank you for pointing this out, the analysis remains valid. We will relax this assumption. 7. *Acronyms:* C-PCP = CDF-based Probabilistic Conformal Prediction. L-CP = Latent-based Conformal Prediction.
Summary: This paper reviews latest developments of conformal prediction methods in multi-output regression tasks. Claims And Evidence: Yes. The paper did a comprehensive overview and detailed analysis of various methods, categorized them into different variants, and compared the results both numerically and visually. Methods And Evaluation Criteria: Yes. The paper used popular datasets and common evaluation criteria. Theoretical Claims: The theoretical claims are well supported by the proof in appendices. Experimental Designs Or Analyses: The experiments are well designed. Supplementary Material: All appendices were reviewed. Relation To Broader Scientific Literature: The work is important and provides an overview of the multi-output conformal regression methods. It can lead to future developments in this field. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strength: 1. The paper is a comprehensive review of an important topic in conformal prediction. Weakness: 1. Many CP methods rely on conditional density estimation techniques and can integrate with various approaches. Were the experiments conducted to compare the performance of CP methods under different conditional density estimation variants to ensure a comprehensive evaluation? 2. Specifically, while the original ST-DQR paper used conditional VAE, this paper employs conditional normalizing flows. Did the authors perform a comparison between these two approaches? 3. More broadly, for a fair comparison, CP methods should be evaluated based on their optimal performance when paired with different conditional density estimation methods. Were such evaluations conducted to ensure fairness and completeness? Other Comments Or Suggestions: No further suggestions. Questions For Authors: NA Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive evaluation and constructive feedback. We address your questions below. **Comparison of CP under different conditional density estimation (CDE) variants:** We agree that evaluating CP methods across different CDEs is important for robustness. We performed extensive experiments (detailed in Appendix G) using a diverse set of CDE models: two normalizing flows (MQF² [33], conditional Glow [54]), Distributional Random Forests [12] (with KDE), and a GMM hypernetwork [23, 8]. Our key findings regarding the relative performance and properties of the compared conformal methods remained stable across these different base predictors, demonstrating the robustness of our conclusions. **Comparison between our normalizing flow approach and the VAE approach of STDQR** Our primary goal was a fair comparison among conformal methods. To achieve this, we standardized the base predictor architecture where appropriate. For STDQR [20], we replaced the original CVAE with a conditional normalizing flow (specifically, MQF² [33]). This ensures that comparisons between STDQR, L-CP, and other methods are not confounded by differences in the underlying generative model architecture (VAE vs. Flow). This adaptation was also motivated by the exact invertibility and direct density evaluation offered by flows, eliminating the noise associated with VAE sampling and the need for directional quantile regression in the latent space, as discussed in Appendix F.3 and recommended by Feldman et al. [20]. While a direct empirical comparison between the two models is outside the scope of this comparative study, our approach ensures a more direct comparison of the conformalization strategies themselves. **Pairing between CP methods and conditional density estimation methods** This is a valid point. Ideally, each CP method could be paired with its optimal CDE. However, for a unified comparative study, this would introduce confounding factors, making it difficult to isolate the performance differences attributable to the CP methods themselves versus the CDE pairings. Our approach was to select a set of strong, representative CDEs (as listed above) and apply them consistently across all applicable CP methods. This ensures a fair comparison of the conformal methods within our unified framework. Appendix G shows that the relative rankings and conclusions are largely consistent across these CDEs, suggesting that our findings are not overly sensitive to a specific CDE choice within this representative set.
null
null
null
null
Beyond Cropped Regions: New Benchmark and Corresponding Baseline for Chinese Scene Text Retrieval in Diverse Layouts
Accept (poster)
Summary: The paper addresses Chinese scene text retrieval challenges, focusing on the complex layouts of Chinese text in real-world scenes. Current approaches that adapt English text retrieval methods to Chinese contexts show limited performance. The authors introduce DL-CSVTR, a benchmark for evaluating Chinese text retrieval across diverse layouts including vertical, cross-line, and partial alignments --- addressing limitations in existing datasets that primarily feature horizontal text. They also propose CSTR-CLIP, a novel model that moves beyond cropped text regions by employing a two-stage training approach. A key innovation is the Random Alignment Granularity Processing module that improves perception of text elements both within and around text regions. Experiments show CSTR-CLIP outperforms previous methods on both existing benchmarks and the new DL-CSVTR benchmark, particularly for challenging text arrangements. Claims And Evidence: The paper presents convincing evidence for some claims but falls short in others. The performance improvements of CSTR-CLIP are well-documented in Tables 1 and 2, and the ablation studies in Table 3 effectively demonstrate component contributions. I find the construction and validation of the DL-CSVTR benchmark problematic. The paper mentions three annotators collected the data but provides minimal details on annotation protocols or quality assurance. For a benchmark paper, this is a significant weakness. Were there inter-annotator agreement measurements? What specific criteria determined layout categories? The dataset size (2,070 images) seems modest for a benchmark intended to evaluate real-world performance. The causal analysis linking model components to performance gains is weak. For instance, when discussing the RAGP module, the paper shows it helps in cross-line and partial layouts (Table 3, rows 5 vs 6), but doesn't adequately explain the mechanism. Why does random granularity alignment specifically help with these layouts? The visualization in Figure 6 is interesting but doesn't fully connect to quantitative results. The paper lacks error analysis - when does CSTR-CLIP fail and why? The performance on individual queries varies dramatically (visible in Figures 12-14 in the supplementary material), but this variation isn't analyzed in the main text. Without understanding failure modes, it's difficult to fully assess the model's robustness. Methods And Evaluation Criteria: The proposed methods align well with the unique challenges of Chinese scene text retrieval. Moving beyond cropped regions is particularly sensible, as Chinese characters often appear in complex spatial arrangements that traditional bounding box approaches can't handle effectively. The two-stage training process reflects a thoughtful consideration of what's actually needed: both OCR capabilities and layout understanding. I'm less convinced about some aspects of the evaluation framework. While creating a dedicated benchmark for diverse layouts addresses a real gap, the construction feels somewhat ad hoc. The authors manually collected images for different layout categories, but provided little justification for why these specific 89 query terms were chosen or how well they represent real-world retrieval scenarios. A more systematic approach might have started with an analysis of query distributions from actual user data. The benchmark's size (around 2,000 images) strikes me as minimal. For comparison, standard object detection benchmarks typically include tens of thousands of images. This raises questions about whether performance gains would generalize to larger, more diverse datasets. One methodological strength is the comparison with both CLIP baselines and prior specialized approaches. This shows the contribution beyond just leveraging a strong pretrained model. The speed-accuracy tradeoffs are also reasonably explored, though real-world deployability considerations could have been discussed more thoroughly. In summary, while the methods are well-matched to the problem, the evaluation criteria would benefit from more rigorous benchmark construction and validation. Theoretical Claims: The paper is primarily empirical in nature and does not present formal mathematical proofs or theoretical guarantees that require verification. The contributions are algorithmic and experimental rather than theoretical. The authors do make some informal claims about why their approach works, particularly regarding the limitations of cropped-region paradigms and the benefits of multi-granularity alignment, but these are supported through ablation studies and experimental results rather than formal proofs. The paper's technical foundation largely builds on the existing CLIP architecture with modifications specific to the Chinese scene text retrieval task. The authors explain their algorithmic contributions (like the Random Alignment Granularity Processing module) but don't provide theoretical convergence guarantees or complexity analyses that would typically accompany theoretical papers. Given the applied nature of the work, the absence of formal proofs is not necessarily a weakness. However, a stronger theoretical analysis of why multi-granularity alignment specifically addresses the challenges of diverse Chinese text layouts could have strengthened the paper's contributions beyond empirical results. Experimental Designs Or Analyses: I examined several aspects of the experimental design and found some methodological issues: The DL-CSVTR benchmark construction lacks statistical rigor. The authors use three annotators, but don't report inter-annotator agreement or formalized criteria for categorizing layouts. This raises questions about the benchmark's reliability and reproducibility. For evaluation metrics, they rely solely on mean Average Precision (mAP), which is standard but insufficient alone. Given their focus on diverse text layouts, layout-specific metrics capturing the unique challenges of vertical or cross-line text would strengthen their analysis. The ablation study (Table 3) is reasonably designed to isolate component contributions, but lacks error bars or significance testing. With performance differences sometimes being modest (like 88.41% vs 88.57% on CSVTR), it's hard to assess whether improvements are meaningful or statistical noise. The visualization analysis (Figure 6) offers qualitative insights but feels cherry-picked. A more systematic visualization approach across different query types would better support claims about the model's perceptual abilities. The baseline implementations warrant scrutiny. The authors replace backbones in previous methods with CLIP for "fair comparison," but this modification fundamentally changes those methods. I question whether these are still valid representations of the original approaches or essentially new hybrid models. Supplementary Material: I examined most of the supplementary material, focusing particularly on: The dataset analysis section, which provided better insight into the actual distribution of text layouts in CSVTR (92.62% horizontal) that justified the need for their new benchmark. The additional algorithm details were useful, especially Algorithm 3 which clarified how the Random Alignment Granularity Processing actually works in practice --- something not fully explained in the main paper. The per-query AP comparison figures (12-15) revealed substantial performance variability across different query terms that wasn't apparent from the aggregated mAP scores alone. This suggests the model improvements might be query-dependent rather than universally effective. I also checked the implementation details for baseline reproduction. Their approach of replacing backbones with CLIP seemed reasonable but raised questions about whether these modified baselines truly represent the original methods. The visualizations in Figure 11 provided some qualitative evidence for their claims, but I would've preferred more systematic failure case analysis rather than cherry-picked examples. Relation To Broader Scientific Literature: The paper makes two significant contributions that connect to several research threads: The DL-CSVTR benchmark extends previous scene text retrieval datasets (like IIIT and CSVTR), addressing their notable bias toward horizontal text layouts. While prior benchmarks established evaluation frameworks, they failed to capture layout diversity in real Chinese text. CSTR-CLIP relates to recent advances in guided attention for vision-language models. Similar to MaskCLIP and Red-Circle's approach to regional attention, it directs CLIP's focus while preserving global context. However, its full-image approach represents a departure from the dominant cropped-region paradigm established in Mishra's 2013 work and continued through Gomez (2018) and Wang (2021). The multi-granularity alignment concept evolved from cross-modal embedding work by Luo and Wen, but introduces flexibility previous approaches lacked. A notable gap is limited engagement with transformer-based OCR systems that have shown promise for complex layouts. By positioning their work primarily within retrieval rather than OCR literature, the authors somewhat constrain their conceptual framework. Essential References Not Discussed: Based on my knowledge, I did not identify any major omissions in the related work. Other Strengths And Weaknesses: ### Strengths: 1. The paper addresses a practical problem in Chinese scene text retrieval that differs significantly from English text retrieval challenges. The authors effectively identify limitations in applying English-centric methods to Chinese texts with diverse layouts. 2. Their technical approach cleverly combines CLIP's visual understanding with layout-specific guidance. Rather than merely cropping text regions, they preserve global context while directing attention to text areas. 3. The two-stage training process shows careful consideration of the problem's unique requirements. The first stage focuses on core OCR abilities while the second enables handling complex layouts. ### Weaknesses: 1. The experimental analysis lacks depth in connecting model behavior to performance across different layout types. While ablation studies show component contributions, they don't sufficiently explain why certain components help specific layouts. 2. The performance differences shown in supplementary figures (12-15) reveal considerable variation across query types that deserves more thorough analysis. This would provide more meaningful insights than the aggregate metrics alone. 3. Several technical descriptions need clarification, particularly for the Random Alignment Granularity Processing. The algorithm is central to their approach but described in abstract terms without concrete examples of how it transforms inputs. 4. The paper would benefit from more direct comparison with recent transformer-based approaches that handle complex layouts, situating their work in the broader context of both retrieval and OCR research. Other Comments Or Suggestions: 1. Section 4 describes $Conv _ {Textpos}$ but doesn't specify its architecture details (kernel size, etc.). 2. The explanation of hyperparameters α, β, and θ in RAGP (page 5) lacks justification for chosen values. How sensitive is performance to these settings? 3. The discussion of inference speed (FPS) in results sections mentions improvements but doesn't analyze computational complexity of different components. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comment, and we would like to clarify these questions according to subjects. **Question about DL-CSVTR datasets** 1. Claims And Evidence's Para 2 We ensured that the process involved three annotators, with one main annotator overseeing the data quality and consistency. The main annotator was responsible for validating the layout classification, image quality, and privacy considerations of the data submitted by the other two annotators. Additionally, the data provided by the main annotator was shared with the other annotators for reference. After annotating six query words, all three annotators would engage in discussions to identify and resolve any inconsistencies, thus ensuring alignment in the annotations. Regarding layout categories, we employed manual quality control to ensure that the visual representation of query words in images strictly adhered to the corresponding layout categories. We ensured that no additional layout formats for the same query word interfered with the classification. 2. Methods And Evaluation Criteria's Para 3 DL-CSVTR refers to the scale of scene text retrieval benchmarks in both Chinese and English scene text retrieval tasks, such as CSVTR, IIITSceneTextRetrieval, and StreetViewText. This approach ensures that the dataset is appropriately aligned with existing benchmarks, making it well-suited for evaluating scene text retrieval performance. 3. Methods And Evaluation Criteria's Para 2 The 89 query words, sourced from real street scenes including trademarks and phrases, were manually selected and clustered by annotators based on layout requirements in large-scale images. Data collection preceded model design, as our prior research identified gaps in handling specific Chinese text layouts. To address this, we created DL-CSVTR using the CSVTR methodology, ensuring the dataset's relevance and authenticity with real-world street view scenes. **Question about CSTR-CLIP model** 1. Weaknesses 1, 3 & Experimental Designs Or Analyses's Para 3 RAGP relaxes the strict constraints of location-based suggestions. After the first training stage, the model matches regions indicated by Textpos Conv. However, layouts like cross-line and partial layouts may only appear partially within the detection region. RAGP mitigates this by adjusting the alignment of the suggestion region with the query, improving performance for layouts that don’t fully align, especially for cross-line and partial layouts. The modest improvement with CSVTR is because RAGP addresses location constraints, which are less relevant for horizontally aligned query words, as shown in Figure 2. We also appreciate the reviewer’s suggestion to include the pseudocode for RAGP. We will provide a more detailed description of the RAGP algorithm in the camera-ready paper. 2. Other Comments Or Suggestions 1 we adopted the same kernel size as the first convolutional layer of CLIP's image data preprocessing to maintain compatibility with subsequent fusion. 3. Other Comments Or Suggestions 2 The variation in retrieval accuracy across different query words is most evident in cross-line and partial layouts, due to differences in the spatial distances between query word parts in cross-line layouts and the varying paragraph lengths in partial layouts. These issues are closely related to RAGP, which guides the model to focus on the visual features around the suggested region. The effectiveness of RAGP is influenced by hyperparameters α, β, and θ, which control the receptive field size. We conducted experiments and visualized bad cases to determine the optimal combination of these parameters during training. **Question about experiment design** 1. Weaknesses 4 We chose GOT as a transformer-based approach, using spotting-related instructions. Specifically, edit distances between a query word and the spotted words from scene images are used for text-based image retrieval. | Method | CSVTR | DL-CSVTR-V | DL-CSVTR-CL | DL-CSVTR-P | |-----|-----|-----|-----|-----| | GOT |86.56| ***84.91*** | 59.47 | 55.78 | | Ours |***88.57***| 84.44 | ***65.56*** | ***61.83*** | 2. Experimental Designs Or Analyses Para 3 Recall is suitable for single-target queries, but in DL-CSVTR, a query word can correspond to multiple targets, making it unsuitable. mAP, which has been used as the sole metric for accuracy in previous scene text retrieval works, is therefore employed in this study. 3. Other Comments Or Suggestions 3 Please refer to Reviewer QR6X's response 1. 4. Weaknesses 2 Errors mainly occur in cross-line layouts with large gaps between query word parts and in partial layouts where the query word occupies a small portion of the string. These issues contribute to bias in errors. RAGP with global feature fusion improved performance, but for challenging samples, further attention to semantically related elements outside the suggested region may be needed. --- Rebuttal Comment 1.1: Comment: The rebuttal provides satisfactory answers to several of my concerns, and the authors seem receptive to feedback. While this is encouraging, I still believe the current version of the paper does not fully meet the threshold for a higher score. Hence, I am keeping my original evaluation.
Summary: This paper focuses on Chinese scene text retrieval, which aims to extend previous English scene text retrieval to Chinese. The authors establish a Diversified Layout benchmark for Chinese Street View Text Retrieval (DL-CSVTR) to assess retrieval performance across different text layouts. They also propose Chinese Scene Text Retrieval CLIP (CSTR-CLIP), a new model integrating global visual information and multi-granularity alignment training. Experiments on existing benchmarks show that CSTR-CLIP achieves an 18.82% accuracy improvement over the previous SOTA model and has a faster inference speed. Analysis of DL-CSVTR validates its superiority in handling diverse text layouts. Claims And Evidence: Yes, confirmed. Methods And Evaluation Criteria: Yes, confirmed. Theoretical Claims: This submission does not involve proof of theory. Experimental Designs Or Analyses: Yes. I have checked all the experimental results, including the comparison with previous SOTA and the ablation of modules of all stages. The experiments are sound and extensive and are superior to previous works. Supplementary Material: I have reviewed the supplementary material, which includes source code and dataset samples. This gives the evidence for reproducibility. Relation To Broader Scientific Literature: Extending text retrieval task to Chinese scenario. It could identify limitations in prior research. Essential References Not Discussed: I hold there are relatively sufficient essential references discussed. Other Strengths And Weaknesses: Strengths: 1) This paper constructs a Chinese scene text retrieval dataset, which includes various layouts, and can support vertical/cross-line/partial text retrieval that English text retrieval seldom encounters. 2) They propose CSTR-CLIP method for Chinese scene text retrieval, which relieves text detection needs and enhances perception flexibility through multi-granularity alignment. 3) Extensive experiments show that the proposed method achieves superior performance on both the previous Chinese scene text retrieval dataset and the proposed DL-CSVTR dataset. Weaknesses: 1) Tab.1, why CSTR-CLIP is much faster than all other methods except CLIP should be explained in detail. 2) Fig.7, what is the application for “Interactive Region-Specified Scene Text Retrieval”? If the user can give the mask, why do they not only input the mask region as a separate image? Other Comments Or Suggestions: Please refer to the above comments. Questions For Authors: Please refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Question 1** Tab.1, why is CSTR-CLIP much faster than all other methods except CLIP? This should be explained in detail. **Response 1** Thank you for the valuable comment. The faster performance of CSTR-CLIP compared to other methods can be attributed to the simplified nature of our approach. Previous methods based on visual embedding techniques such as [1][2], require additional computational effort to crop text regions, and apply model-based style processing to convert those regions into a standardized format for easier matching. Furthermore, these methods also required extra steps to transform the query into an image format that could match the cropped text regions, leading to a significant loss in inference speed. Additionally, earlier approaches based on cross-modal embeddings [3] involved extra costs for designing matching templates and performing region cropping, which further slowed down the inference process. In contrast, CSTR-CLIP benefits from a direct cross-modal matching approach, eliminating the need for extra style transformations, rendering, or template-based matching constraints. This streamlined process allows for much faster inference speed. We will clarify this explanation in the manuscript to provide a deeper understanding of the performance improvements. **Question 2** Fig.7, what is the application for "Interactive Region-Specified Scene Text Retrieval"? If the user can give the mask, why do they not only input the mask region as a separate image? **Response 2** We appreciate your thoughtful question. "Interactive Region-Specified Scene Text Retrieval" is designed to refine the granularity of user searches. When users need to search for images containing specific text within their image library and have some recollection of the region where the text appears, they can provide a region suggestion. By focusing on the region of interest, this enhances the search results, improving accuracy and recall in retrieval tasks. However, since the region specified by the user may not perfectly match, directly cropping the image could lead to information loss and additional computational costs. Thanks to the design and training paradigm of CSTR-CLIP, the model does not limit itself to the given region but also responds to surrounding areas within the suggested region, as illustrated in Figure 6. Therefore, CSTR-CLIP can improve retrieval accuracy and recall by considering the user's region suggestion and query text, all while avoiding information loss. [1]Visual and semantic guided scene text retrieval [2]Visual Matching is Enough for Scene Text Retrieval [3] Scene text retrieval via joint text detection and similarity learning.
Summary: This paper addresses the limitations of existing Chinese scene text retrieval methods, which inherit the solution for English scene text retrieval and fail to achieve satisfactory performance in Chinese scene text retrieval. Therefore, the authors first introduce DL-CSVTR, a new benchmark featuring vertical, cross-line, and partial text layouts for more realistic assessments. Then, they propose CSTR-CLIP. CSTR-CLIP applies a two-stage training process to overcome previous limitations, such as the exclusion of visual features outside the text region and reliance on single-granularity alignment, thereby enabling the model to effectively handle diverse text layouts. Experimental results show that CSTR-CLIP outperforms existing methods significantly on both the standard CSVTR dataset and the new DL-CSVTR benchmark, effectively handling varied text layouts. Claims And Evidence: Yes, I think the claims in this submission are supported by their experiments and discussions. Methods And Evaluation Criteria: Yes, I think the proposed method and evaluation criteria are proper and make sense for the problem. Theoretical Claims: Yes, I have checked the correctness of the proofs for the theoretical claims and found no issues. Experimental Designs Or Analyses: Yes, I reviewed the experimental design and related analyses and found them reasonable and valid. Supplementary Material: Yes, I have reviewed all the supplementary material. Relation To Broader Scientific Literature: This work focuses on Chinese scene text retrieval and proposes complex and variable datasets. Therefore, I think it makes sense for a border impact in the whole scene text retrieval community. Besides the dataset, the author also introduces a method named CSTR-CLIP, which uses full image information with guided attention instead of previous crop-detection processing. I think it is somewhat reasonable and interesting. Essential References Not Discussed: I think this paper adequately cites and discusses the essential related works that are necessary for understanding the context of its key contributions. Other Strengths And Weaknesses: Strengths: - It is nice to see this work proposes a new dataset DL-CSVTR. This dataset is interesting and contains a large number of challenges from real-world scenarios. - Performance of this work is quite well. Especially in those challenge cases such as vertical, cross-line, and partial alignments. Other Comments Or Suggestions: I have no more extra comments. Questions For Authors: 1. In Table 2, we found that CSTR-CLIP has significantly improved performance in various challenge scenarios. However, the basic horizontal cases are not reported together. Can the author provide the provide the corresponding performance results? 2. Although this work is designed for Chinese scene text retrieval. However, all the problems mentioned are also challenges for the English scene text retrieval task. Therefore, can the author provide the performance results of CSTR-CLIP on English scene text retrieval datasets? I think it will be helpful to evaluate its improvement upon recent works on more general datasets. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Question 1** In Table 2, we found that CSTR-CLIP has significantly improved performance in various challenge scenarios. However, the basic horizontal cases are not reported together. Can the author provide the corresponding performance results? **Response 1** We appreciate the reviewer’s comment. We have taken CSVTR as the baseline for horizontal retrieval, as most query text in the dataset (as shown in Figure 2) predominantly follows a horizontal layout. Therefore, the performance of CSTR-CLIP in horizontal retrieval scenarios is essentially reflected in the CSVTR baseline results. For further reference, the results in Table 1 of the paper can provide additional insights into the performance for horizontal retrieval. To elaborate, we manually categorized the visual representations of the query text in the CSVTR dataset based on their corresponding layouts. In cases where a single image contained multiple orientations, such as both horizontal and vertical text, we applied a priority-based classification process. Specifically, if both orientations were fully visible and intact, we followed this priority: cross-line > partial > vertical > horizontal. For instance, if both horizontal and vertical layouts appeared in the same image, we classified the layout as vertical. After conducting this analysis, we found that 92.62% of the data had query words with a horizontal visual representation in the image. Consequently, we have used CSVTR as the baseline for horizontal layout retrieval and did not collect additional horizontal layout data in the DL-CSVTR dataset. **Question 2** Although this work is designed for Chinese scene text retrieval, all the problems mentioned are also challenges for the English scene text retrieval task. Can the author provide the performance results of CSTR-CLIP on English scene text retrieval datasets? I think it will be helpful to evaluate its improvement upon recent works on more general datasets. **Response 2** Thank you for raising this important point. We agree that the challenges discussed in our work are indeed also relevant to English scene text retrieval. In response, we trained CSTR-CLIP using the en-clip model with the same processing approach and evaluated its performance on English scene text retrieval datasets. Regarding the data, we followed the methodology used in SynthText-900K for the English corpus and generated a dataset of 300k images, matching the scale of the pre-trained data used for the Chinese scene text retrieval. For real-world data in the second stage, we used the English subset of the MLT dataset as the training set, aligning it with previous work in English scene text retrieval. In terms of model design, we retained the two-stage framework of CSTR-CLIP, replacing the cn-clip with en-clip as the initialization model. We chose the IIIT Scene Text Retrieval dataset as the benchmark for English scene text retrieval, using mean Average Precision (mAP) as the evaluation metric to compare our method's performance in English scene text retrieval. The results are shown in the table below: | **Method** | **mAP** | |-------------------------------|---------| | TDSL | 77.09 | | VSTR | 77.40 | | Luo et al | 82.15 | | Ours (Stage 1) | 83.17 | | Ours (Stage 2) | ***86.75*** | CSTR-CLIP also demonstrated excellent performance in the English scene text retrieval context. However, we observed that cross-line and partial layouts are less common in the English scene. This was evident in the visual representation of query words in the IIIT Scene Text Retrieval dataset, which showed a similar trend. We believe the key to performance improvement lies in the retention of full-image information and the guidance provided by region suggestions. This is particularly important because many images in the IIIT dataset have lower clarity, and previous crop-based methods struggle to effectively retrieve text from such images. The second-stage improvement likely comes from fine-tuning on real-world data. Therefore, the "Beyond Cropped Regions" paradigm is indeed beneficial for English scene text retrieval as well. We also plan to open-source CSTR-CLIP for English scene text retrieval in the near future. --- Rebuttal Comment 1.1: Comment: Thanks for the response from the author. I have no more questions about it. It is a well-organized paper. The advantage of this work is that the motivation is straightforward and easy to follow. The method is reasonable and highly related to its motivation. The disadvantage of this work is that the proposed method is somewhat simple. Overall, despite some minor flaws, the strengths of this paper outweigh its weaknesses, and I am inclined to maintain my weak accept recommendation.
Summary: In this paper, the authors aim to solve the problem of Chinese scene text retrieval in complex and diverse layouts. They first establish the DL-CSVTR benchmark including vertical, cross-line and partial alignments. In addition, the authors propose CSTR-CLIP method which integrates global visual information with multi-granularity alignment training. The experiments are conducted on both previous benchmark and the proposed DL-CSVTR demonstrating the proposed method CSTR-CLIP outperforms previous SOTA model. ## update after rebuttal I maintain my score after reading rebuttal Claims And Evidence: The claims made in this paper include the specific characteristics of Chinese scene text retrieval, which is evident since Chinese is different from English obviously. Methods And Evaluation Criteria: They are reasonable because Chinese has more vertical textlines than English, and cross-line and partial retrieval are useful because Chinese textline is composed of characters while English textline is composed of words. Theoretical Claims: It seems there is no proof for theoretical claim involved. Experimental Designs Or Analyses: The performance on CSVTR benchmark, DL-CSVTR benchmark and ablation study are all checked, and the results sound solid. Supplementary Material: Reviewed but not detailed check. Relation To Broader Scientific Literature: t pushes the general CLIP model to extensive text retrieval task with sophisticated design. Essential References Not Discussed: NO Other Strengths And Weaknesses: Strengths: (1) In the paper, the authors find that Chinese scene text retrieval is different from English in that Chinese text has many layouts, so they do not only transfer English methods to Chinese scenario but design new retrieval tasks including vertical, cross-line and partial retrieval where English retrieval seldom involves. This is significant since Chinese text together with other similar texts are also used all over this world. (2) The proposed CSTR-CLIP method further pushes the CLIP into Chinese scene text retrieval task, verifying that CLIP has the potential in this task though it is the by-product. The CSTR-CLIP performs well on Chinese benchmarks. (3) The experiments are enough to verify the effectiveness of the proposed method, and many extra experimental results are supplied in the appendix material. Weaknesses: (1) In figure 2, we can conclude that Horizontal text occupies 92.62%. I is not clear about how is the oriented text classified? To horizontal or vertical? (2) Other than vertical, cross-line and partial text retrieval, is there any other type of retrieval task that has not been well addressed? Other Comments Or Suggestions: see the weakness part. Questions For Authors: see the weakness part. Ethical Review Concerns: NO Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Question 1** In Figure 2, we can conclude that horizontal text occupies 92.62%. It is not clear how the oriented text is classified – is it classified as horizontal or vertical? **Response 1** We appreciate the reviewer’s observation. To clarify, we classified the visual representation of query text in the CSVTR dataset based on manual verification of their layout in the images. Specifically, when both horizontal and vertical text appear in the same image, we categorize the query text based on the priority of visibility and completeness. The priority order for classification is as follows: cross-line > partial > vertical > horizontal. In other words, if both horizontal and vertical text are present in the same image, the text would be classified as vertical. This approach ensures a consistent and logical classification across all images, accounting for different layout variations. **Question 2** Other than vertical, cross-line, and partial text retrieval, is there any other type of retrieval task that has not been well addressed? **Response 2** In terms of Chinese text layouts, there is another layout type that we believe requires further attention: dispersed layout. For instance, a query term like "ICML" could appear in an image as "International Conference on Machine Learning", where the characters are spaced out. This scenario represents a potential area of improvement, and we are working on solutions to address this issue. Furthermore, we believe two additional challenges in scene text retrieval should be explored in future work: 1) Retrieval of text with specific attributes, such as layout and color; 2) Retrieval of text in relation to visual elements, for example, "a blue building with the text 'xxx' on it." We see these as promising directions for future advancements in scene text retrieval. We will discuss it in the future work in our camera-ready version.
null
null
null
null
null
null
Towards Rationale-Answer Alignment of LVLMs via Self-Rationale Calibration
Accept (poster)
Summary: The paper targets misalignment between rationales and answers in Large Vision-Language Models (LVLMs), particularly in VQA tasks. It introduces Self-Rationale Calibration (SRC), a framework that iteratively aligns rationales with answers using a combination of rationale fine-tuning, pairwise candidate scoring, and confidence-weighted preference curation. The main contributions include (1) the SRC framework, which improves the factual consistency and reasoning quality of LVLMs, (2) a pairwise scoring strategy with R-Scorer, and (3) extensive experiments demonstrating significant performance improvements across multiple benchmarks Claims And Evidence: No serious flaws found Methods And Evaluation Criteria: See weakness Theoretical Claims: No serious flaws found Experimental Designs Or Analyses: No serious flaws found Supplementary Material: Appendix reviewed Relation To Broader Scientific Literature: See Other Strengths And Weaknesses Essential References Not Discussed: Related works are clear Other Strengths And Weaknesses: The proposed method is reasonable and meaningful, with experimental results demonstrating its effectiveness across multiple datasets. However, the novelty of this work is questionable, as the core concept of Self-Rationale Calibration (SRC) does not introduce fundamentally new learning principles or new findings. Specifically, the key innovation is shifting alignment focus from correctness alone to rationale consistency. Previous methods align answers or vision-text pairs, but SRC aligns the thinking process behind answers. The integration of rationale fine-tuning, pairwise scoring, and preference learning for multimodal alignment is reasonable. However, SRC primarily combines existing techniques (CoT, preference fine-tuning) rather than introducing new theoretical principles. Other Comments Or Suggestions: The paper excessively uses various special text styles, including bold, italics, colored text, etc., which make reading difficult and visually unpleasant. Additionally, the color of in-text citations and section references seem different from the standard template; I am not sure if this leads to formatting issues. Questions For Authors: Could the authors clarify the novelty beyond integrating existing techniques? In other words, is the contribution primarily on the engineering and empirical improvements side? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear reviewer, due to **space limits** of initial rebuttal, we are unable to elaborate on details or minor points, but we would be glad to clarify any further concerns in the next-round reply. --- > **The novelty and contribution of SRC.** We sincerely appreciate the reviewer’s positive feedback regarding the reasonableness and effectiveness of our proposed method. We would also like to respectfully **clarify the novelty and contribution of our work**, specifically addressing the reviewer’s concern regarding the originality of SRC beyond simply integrating existing techniques. **# Motivation and New Findings:** Existing LVLM post-training approaches primarily emphasize aligning outputs based on correctness or vision-text consistency. However, as we observed and illustrated through concrete examples (Figure 2), merely correct answers may result from spurious correlations rather than genuine understanding or reasoning processes. This observation **highlights a critical yet overlooked misalignment**—the disparity between a model’s rationale (its underlying reasoning) and the final answer. Our novel insight lies here: by explicitly calibrating the alignment between rationales and answers, SRC **significantly improves not only factual correctness but also logical consistency and semantic robustness**. We empirically validate this improvement via semantic entropy measurements (Section 4), showcasing clear benefits in scenarios that demand reliable and interpretable multimodal reasoning, such as visual question answering (VQA). **# Novelty and Contributions:** We respectfully emphasize that **SRC is more than merely an engineering integration of existing techniques** (such as CoT or preference fine-tuning). Rather, it represents a novel shift from answer-centric or vision-text alignment to **rationale-centric alignment**. Specifically, SRC differs fundamentally from prior approaches in three key aspects: 1. **Rationale-Oriented Preference Calibration:** Unlike traditional preference-based fine-tuning methods that solely focus on output correctness or vision-text consistency, ***SRC uniquely prioritizes the internal quality of rationales themselves***. It explicitly calibrates rationale-answer consistency, positioning rationale correctness as a core training objective. We further clarify that ***our Rationale Fine-tuning (Section 3.1) is fundamentally different from standard CoT techniques***. Specifically, CoT explicitly promotes step-by-step reasoning via prompts, whereas our rationale fine-tuning intrinsically induces the model to consistently generate rationale-answer pairs (RAPs) spontaneously, without explicit prompting. Additional detailed discussions distinguishing our approach from CoT are provided in Section 2. 2. **Iterative Candidate Calibration via Pairwise Scoring:** While existing methods leverage preference fine-tuning broadly, SRC framework innovatively exploits inherent variability among candidate rationale-answer pairs ***through iterative candidate calibration and a tailored pairwise scoring mechanism***. This strategic design effectively discriminates subtle rationale quality differences, accurately capturing relative superiority—even when different answers appear equally at the correctness level. 3. **Tailored Scoring Model (R-Scorer):** To facilitate efficient, scalable, and effective rationale-answer evaluation, we propose R-Scorer—a lightweight model specifically tailored for pairwise candidate scoring. As demonstrated by experiments and human evaluation, ***R-Scorer substantially outperforms generic LLMs***, even those substantially larger in scale (up to 48×), thereby underscoring the unique advantage of our designed pair-wise scoring strategy. We hope this clarification can addresses the reviewer’s concerns and effectively highlights the originality and contributions of our work. --- > **The style and formatting of the main paper.** We appreciate your feedback on the paper's style and formatting. In the revised version, we will streamline the overall presentation style of our paper. For the citation style, we follow the same format used in previous ICML publications. We sincerely thank the reviewer for your valuable suggestions, which help improve our manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. The author's responses aligned with my initial understanding of the work; thus, I maintained my original overall recommendation.
Summary: This paper proposes Self-Ratationale Calibration, a novel framework to align the rationales and answers and LVLMs. SRC shows consistent improvement on both LLaVA-1.5 and LLaVA-Next on several benchmarks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: NaN Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: NaN Essential References Not Discussed: NaN Other Strengths And Weaknesses: My main concern is about the benchmark results. 1. In Table 1, some results of previous methods are lower than the number from their original paper. For example, RLAIF-V gets 35.4 on MMStar from its original paper, but the authors reported it as 33.7. Similarly, CSR gets 71.1 on LLaVA-Wild, not 64.5. I think this is a serious problem, and the authors should explain the reason for the misalignment in the rebuttal. 2. There are only a few benchmarks reported in the main table. This also hinders a fair judgment about the real performance of the proposed method. 3. A minor point. There are too many details introduced in the method part, a slightly simplified version may be better for readers to capture the main idea of each component. In a word, I think the proposed method is complex and novel. But the experiment results with the above-mentioned problems failed to prove its effectiveness. I will adjust my final score based on the rebuttal and comments from other reviewers. ## update after rebuttal The rebuttal partly solved my concerns. I raise my score to weak accept. Other Comments Or Suggestions: NaN Questions For Authors: NaN Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, due to **space limits** of initial rebuttal, we are unable to elaborate on details or minor points, but we would be glad to clarify any further concerns in the next-round reply. --- > **The discrepancies in reported benchmark results.** We sincerely appreciate your careful examination of our reported benchmark results (Table 1). We understand your concerns about the discrepancies in the reported performance of previous works, particularly the differences in RLAIF-V and CSR results compared to their original papers. **First**, we would like to clarify **LVLMs are sensitive to deployment environments** (e.g., deployment configurations and inference backends). To ensure fairness and reproducibility, **we consistently evaluated all baseline models using their officially released weights under identical and controlled experimental settings via the VLMEval framework**. **Second**, regarding the specific cases you highlighted: For **RLAIF-V**, while the original MMStar score (35.4) is indeed higher than ours (33.7), we noticed **this discrepancy is not unique to our evaluation**. There are other works, such as [1], that report a **much lower value** of 31.8. For **CSR**, the authors did not release results for LLaVA_Bench (LLaVA-Wild). Additionally, since LLaVA-Wild involves "LLM-as-a-judge" through proprietary GPT-4 API, it is challenging to pinpoint the cause of the observed differences. Notably, **we found significant discrepancies in CSR's original reporting (e.g., SEEDBench) relative to the evaluations of other papers**, such as [2], **whose results closely match ours**: | | Name | MMStar | SEEDBench | | --- | --- | --- | --- | | CSR (official) | LLaVA-1.5-7B | - | ***58.6*** | | Paper [2] | LLaVA-1.5-7B | 32.2 | 65.6 | | Ours | LLaVA-1.5-7B | 32.1 | 64.6 | | CSR (official) | LLaVA-1.5-7B + CSR | - | ***60.3*** | | Paper [2] | LLaVA-1.5-7B + CSR | 32.4 | 65.4 | | Ours | LLaVA-1.5-7B + CSR | 32.7 | 64.4 | To further enhance transparency, we will release our evaluation settings and inference results in the future, allowing the community to replicate our results independently. We hope this explanation helps clarify the situation. [1] A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs [2] Self-Correction is More than Refinement: A Learning Framework for Visual and Language Reasoning Tasks --- > **More evaluation on various benchmarks.** Thank you for your valuable feedback. We appreciate your concern regarding the limited number of benchmarks reported in the main table. We would like to clarify that **MMStar itself integrates multiple comprehensive benchmarks**, including MMBench, SEEDBench, MathVista, and MMMU, all of which address data leakage issues and adhere to the vision-centric QA principle [3]. As such, MMStar can provides a thorough and holistic evaluation of the model's overall capabilities. In response to your comment, we have conducted **additional evaluations of the following benchmarks**: SEEDBench, AI2D, ScienceQA, and RealworldQA. Please refer to the updated results in the following table: | Name | SEEDBench | AI2D_TEST | SQA | RealworldQA | | --- | --- | --- | --- | --- | | LLaVA-1.5-7B | 64.6 | 51.4 | 66.3 | 53.8 | | + POVID | 64.1 | 50.7 | 66.1 | 54.2 | | + HA-DPO | 63.3 | 50.1 | 65.1 | 53.4 | | + SIMA | 64.5 | 47.7 | 66.1 | 52.8 | | + SeVA | 63.7 | 49.7 | 64.5 | 54.0 | | + RLAIF-V | 64.3 | 51.5 | 63.8 | 50.1 | | + CSR | 64.4 | 51.0 | 65.7 | **54.6** | | **+ Ours** | **67.3** | **55.4** | **68.1** | 53.9 | From these extended evaluations, we observe that **our method consistently outperforms prior methods** across most benchmarks, except for RealworldQA (where results are comparable to them). Overall, these additional results support our claim regarding the broad effectiveness and robustness of our approach. [3] Are we on the right way for evaluating large vision-language models? --- > **Providing a slightly simplified version for methodology.** Thank you for your valuable feedback. We appreciate your suggestion regarding simplifying the method section to enhance readability. In the revised version, we will streamline the descriptions while preserving the key details to ensure clarity for the readers. --- Rebuttal Comment 1.1: Comment: The rebuttal partly solved my concerns. I raise my score to weak accept.
Summary: The paper introduces Self-Rationale Calibration, a framework designed to enhance the alignment between rationales and answers in VLMS. The motivation stems from the observation that LVLMs can generate correct answers but often fail to provide factually grounded rationales, leading to inconsistent reasoning. Generally speaking, SRC calibrates LVLMs by iteratively aligning rationales with answers, improving logical consistency. Additionally, the authors introduce R-Scorer, a lightweight LLM-based evaluator that scores responses based on rationale quality and factual consistency. Finally, experimental results across multiple VQA benchmarks indicate that SRC outperforms existing alignment methods, improving both perception and logical reasoning capabilities. Claims And Evidence: They are supported by qualitative examples and quantitative improvements in fine-grained perception and logical reasoning. Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria make sense for the problem. Theoretical Claims: The paper assumes that rationale fine-tuning inherently improves answer quality, but it does not explore failure cases or potential biases introduced by the fine-tuning process. Experimental Designs Or Analyses: The experiments compare SRC with state-of-the-art LVLM post-training strategies, including DPO-based methods (e.g., RLAIF-V, CSR, SeVA). Results indicate SRC provides significant improvements in: - Logical reasoning: +8.4% increase (from 39.6 → 43.6 on MMStar) - Fine-grained perception: +8% increase (25.2 → 33.2) - Math reasoning: +9.2% improvement Ablation studies confirm the importance of rationale fine-tuning, scoring, and iterative alignment. One concern is that Iterative fine-tuning with preference alignment is expensive, requiring multiple rounds of scoring and calibration. Supplementary Material: No, I do not review the supplementary material. Relation To Broader Scientific Literature: Related to multimodal grounding approaches. Essential References Not Discussed: n/a Other Strengths And Weaknesses: S: - SRC explicitly aligns rationales with answers via preference fine-tuning, a novel improvement over DPO. - Provides an alternative to costly GPT-4o-based evaluations. W: - SRC involves multiple iterations of fine-tuning and preference scoring. It would be better to discuss the efficiency in detail. Other Comments Or Suggestions: There are some other papers about rationalization, would it be possible to discuss them in related work? [1] Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint [2] Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization [3] Breaking Free from MMI: A New Frontier in Rationalization by Probing Input Utilization [4] MGR: Multi-generator Based Rationalization. Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, due to **space limits** of initial rebuttal, we are unable to elaborate on details or minor points, but we would be glad to clarify any further concerns in the next-round reply. --- > **Training efficiency of SRC.** We sincerely appreciate the reviewer's feedback considering the training efficiency of our SRC framework. **First**, we would like to clarify that direct efficiency comparisons across recent post-training methods are challenging, **due to the diversity in methodology and resource requirements**. For instance, methods such as RLAIF-V requires deployment of multiple open-sourced LVLMs; POVID and HA-DPO depend heavily on proprietary models (e.g., GPT-4V) for generating preference data; and CSR, the most comparable method to ours, also employs an iterative post-training strategy. **Second**, regarding the efficiency of our framework, although SRC may not match the efficiency of methods using proprietary LVLMs (API-driven) and single-round preference fine-tuning, e.g., POVID or HA-DPO, **SRC consistently achieves substantial improvements** in perception, reasoning, and generalization across various benchmarks. Moreover, the iterative nature of SRC represents an deliberate design choice aimed at progressively enhancing alignment between rationales and answers. Importantly, as demonstrated in Figure 7, **even a single iteration of SRC yields significant performance improvements** across benchmarks. This highlights that substantial gains can be achieved early in the process, offering a favorable option/trade-off between computational cost and performance enhancement. **Thrid**, to proactively address and mitigate the efficiency concern associated with the iterative process, we **have incorporated several key optimizations** into the SRC framework: 1. **Optimized Candidate Generation**: SRC employs ***sentence-level beam search with constrained search-tree widths*** to generate rationale-answer pair (RAP) candidates. This approach significantly reduces computational overhead compared to exhaustive beam search. Please refer to Appendix A.2 ("Candidate Generation") for further details. 2. **Lightweight Pairwise Scoring Model**: Instead of utilizing generic LLMs, which can be substantially larger (up to approximately 48×), SRC introduces R-Scorer, ***a specialized lightweight scoring model*** tailor for evaluating rationale quality and factual consistency in a pairwise socring manner. Our experiments and human evaluations (Figure 6) demonstrate R-Scorer’s superior balance of efficiency and effectiveness in the scoring process. 3. **Efficient Engineering Implementation:** SRC incorporates the vLLM library with the prefix caching technique for ***accelerating inference during scoring***, further enhancing computational efficiency. In summary, SRC's computation cost is comparable to similar iterative methods, and significant performance benefits can be observed even after a single iteration. The combination of improved candidate generation, a lightweight specialized scoring model, and optimized inference implementation ensures that SRC provides a practical and effective approach for enhancing LVLMs' alignment between rationales and answers. --- > **More discussion of related works.** We thank the reviewer for suggesting the inclusion and discussion of recent rationale-focused works. For [1] and [4], they focus primarily on addressing internal degeneration problems within the rationale generation process itself, while [2] and [3] explore rationale extraction based on the maximum mutual information (MMI) criterion, aiming specifically at mitigating spurious feature reliance ([2]) and probing model input utilization ([3]). Here, we consider [2] and [3] to be more closely aligned with the context of SRC. [2] and [3] target input-level rationalization (selecting input subsets as rationales), whereas the proposed Self-Rationale Calibration (SRC) framework operates distinctly **at the output level**, calibrating **the alignment between generated rationales and answers** within LVLMs. In the revised version, **we will incorporate references [2] and [3] in our paper**, highlighting the methodological differences from SRC and discussing their broader implications for rationale-aware modeling. We greatly appreciate this valuable suggestion and believe such clarifications will further strengthen the presentation of our contributions. [1] Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint [2] Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization [3] Breaking Free from MMI: A New Frontier in Rationalization by Probing Input Utilization [4] MGR: Multi-generator Based Rationalization --- Rebuttal Comment 1.1: Comment: Thank you for the reponse and I have raised the score accordingly.
Summary: This paper attempts to address the misalignment between the final answers and the perceptual reasoning, i.e., rationales, from LVLMs' outputs. With a prior fine-tuning for the model to generate rationales, the authors propose a pairwise scoring strategy considering model confidence and LLM-driven assessment, i.e., R-scorer, to identify superior preference pairs for fine-tuning the quality of the rationale-grounded responses. Extensive experiments have validated that the alignment is able to allow overall perceptual improvement across various task domains. ## update after rebuttal Considering other reviewers' comments, I agree that there exists some concerns regarding theoretical novelty. I decide to keep my original recommendation. Claims And Evidence: The main claim that rationale-response alignment improves the model’s perceptual ability, is clearly and comprehensively validated through experiments. Methods And Evaluation Criteria: Yes, the method is novel and appear to be well-aligned with the problem. The evaluated benchmark includes various ability tests for LVLMs. Theoretical Claims: The paper does not involve any theoretical results. Experimental Designs Or Analyses: It seems the Table 1’s Math and S&T results for LLaVA-1.5 uses different $\alpha$ values to achieve their best shown in Fig. 9. Such inconsistency would degrade the credibility of the work. Supplementary Material: Yes. The supplementary material provides more implementation details, experimental settings, additional results for the paper. Relation To Broader Scientific Literature: The work is closely related to LLM-based judge and preference optimization of LVLMs, which are referenced in the main paper. Meanwhile, the data used in this work involves reasoning ideas such as chain-of-thought reasoning. Essential References Not Discussed: Key references are well-discussed. Other Strengths And Weaknesses: Strengths: 1. The problem identified exists in most LVLMs and is indeed a crucial challenge for visual understanding. The proposed method has verified that aligning language reasoning & results can also improve visual perceptual ability of LVLMs, demonstrating great novelty. 2. The light-weight scoring model is effective and efficient in identifying superior preference pairs. 3. The experimental results demonstrate comprehensive improvements over various domains, verifying the advanced visual perception. Weaknesses: 1. The beam search results may have similar confidence scores as they are top log-probability results. With such similarity, I doubt the usability of the confidence scores. In the appendix, the overall distribution visualizations of the preferred & non-preferred responses tell little about whether confidence scores help to distinguish candidates in an individual response group. 2. The scale of x-axis in Fig. 9(a) is weird: [0.0, 0.2, 0.4, 0.6, 0.8, **0.9**, 1.0]. The authors could re-organize the grid intervals to accommodate the 0.9 results. 3. Currently, there are only limited results on LLaVA-Next-8B with no comparison to other methods. Results on architectures other than LLaVA are also limited. 4. Also see the above *Experimental Designs Or Analyses* for my concern about the consistency issues. Other Comments Or Suggestions: 1. The two types of winning scores have the same notation in Eq.1 and Eq. 2. 2. Typos: (a) Line 131 right column: “provide a *rationales* before providing *a* answer”. (b) Line 270-271 right column: “then *sent* them”. Questions For Authors: 1. It’s unclear what the calibration process uses as the “remaining data” stated in Sec. 4.1 “Evaluation”. 2. Why does R-scorer perform better against larger LLMs while having much less parameters? The authors are suggested to include some analysis in Sec. 4.3 “Different Scoring Models”. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, due to **space limits** of initial rebuttal, we are unable to elaborate on details or minor points, but we would be glad to clarify any further concerns in the next-round reply. --- > **The usability of the confidence scores in Confidence-weighted Winning Score.** While we acknowledge that sentence-level beam search may yield candidates with closely ranked log-probability scores, we would like to clarify the following points: 1. **The confidence score is a complementary role in the candidate selection process.** The core of SRC’s candidate selection lies in the ***pairwise scoring*** judged by the R-Scorer. The introduction of confidence score can be considered as a ***post-processing*** of winning scores against edge cases (e.g., ties between candidates), as discussed in Section 3.3. The results in Figure 9 that the moderate introduction of confidence ($1-\alpha$) benefits SRC performance, also support the effectiveness of this design choice. 2. **SRC adopts a relative confidence advantage in the candidate selection process.** Even when beam candidates have similar confidence scores, their ***relative ranking*** remains informative. Specifically, we transform log-probabilities into rank-based scores (Equation 4) and apply a rank-based weighting scheme (Equation 5). This allows the framework to capture their ordinal confidence advantage, allowing for more nuanced processing of winning scores even when the confidence scores are similar. We hope this addresses your concerns regarding the usability of confidence scores in candidate selection. --- > **Results on LLaVA-Next-8B and more experiments on other LVLM architectures.** We sincerely appreciate the reviewer’s valuable suggestion. For existing methods (e.g., CSR, RLAIF-V, and SeVA), they are **completely built upon the LLaVA-1.5 codebase**. As these methods **have not been adapted to more advanced architectures** like LLaVA-Next, our primary experiments focused on LLaVA-1.5 for fair comparison. Here, we have also conducted **preliminary experiments with Qwen-VL [1]**. Though limited by the rebuttal time for optimal tuning, the initial results (shown below) support SRC's generalizability across different architectural paradigms: | Method | Overall | CP | FP | IR | LR | | - | - | - | - | - | - | | Qwen-VL | 35.7 | 60.8 | 30.0 | 49.2 | 27.6 | | **+ SRC (ours)** | **38.8** | **64.4** | **35.6** | **50.0** | **29.2** | [1] Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond --- > **The inconsistency of the main results and Figure 9.** We sincerely appreciate the reviewer’s kind attention of our results. We would like to clarify that the discrepancy between Table 1 and Figure 9 **arises from their different experimental settings**: * Table 1 (the main results) reports **final post-training performance** after full SRC post-training, where LVLMs are undergone multiple iterations as detailed in Section 3.4. * Figure 9 shows ablation results of **intermediate iteration** since considering the efficiency of the ablation studies. Below we provide **the experiment results ($\alpha=\{0.8, 0.9\}$, and full training)** for reference: | Training | Math | S&T | | - | - | - | | 0.8 (1 iter, Figure 9) | 30.0 | **29.6**| | 0.9 (1 iter, Figure 9) | **34.0** | 24.8 | | Full SRC, Table 1 | 33.2 | 28.8 | (overall performance of full SRC is the best) We will clearly state the experiment setting difference in the revised version and apologize for any confusion caused by this omission. --- > **General LLMs and R-Scorer during the pair-wise scoring process.** We appreciate the reviewer’s comment on R-Scorer’s strong performance despite its smaller size. While large generic LLMs (e.g., Qwen-72B, LLaMA-3-70B) are trained for broad tasks (e.g., dialogue, math, QA), **R-Scorer is specifically optimized for pairwise scoring in SRC**. Its focused training enables more effective evaluation of rationale quality and factual consistency. This is the reason why its lightweight scale can outperform large generic LLMs in SRC, as supported by our experiments. As shown in Table 5 (Appendix), scaling R-Scorer (1.5B → 7B) indeed improves post-training performance. However, considering **the trade-off between efficiency and marginal gains**, we chose the 1.5B model for formal experiments. --- > **The clarification of the “remaining data”.** We acknowledge that the term "remaining data" was ambiguous and will restructure Section 4.1 to explicitly delineate data usage across all stages. Below is a detailed clarification: **Data Pipeline:** * Our initial data pool comprises 57K samples collected from open-source datasets. * Through rationale augmentation and filtering (Section 3.1), we curated the final 43K samples for SRC. **Data Allocation:** * Rationale Fine-tuning: ~20K samples (Section 3.1). * Preference Fine-tuning: 12K samples for calibration (Section 3.4). * Evaluation: The remaining portion. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I raise my score to accept.
null
null
null
null
null
null
LRA-QViT: Integrating Low-Rank Approximation and Quantization for Robust and Efficient Vision Transformers
Accept (poster)
Summary: This paper presents LRA-QViT, a novel framework integrating low-rank approximation (LRA) and quantization to improve the efficiency and robustness of Vision Transformers (ViTs), particularly for deployment in resource-constrained environments such as edge and mobile devices. The authors introduce Reparameterizable Branch-based Low-Rank Approximation (RB-LRA), which mitigates information loss from LRA via weight reconstruction. Additionally, they propose LRA-aware quantization, which addresses outliers induced by LRA using Weight-Aware Distribution Scaling (WADS) and per-token quantization. The method is validated through extensive experiments on ImageNet, demonstrating superior efficiency-accuracy trade-offs compared to existing compression and quantization baselines. Claims And Evidence: The claims in the paper are well-supported by experimental results: - RB-LRA and RB-LRA + KD improve accuracy over naive LRA by introducing a residual branch and fine-tuning strategy, as shown in Table 1. - WADS outperforms naive post-training quantization (PTQ), SmoothQuant, and QADS on RB-LRA fine-tuned models. - The proposed method enhances practical deployment: The combination of RB-LRA and WADS achieves 1.9×–3.2× inference speedups on mobile devices and 1.5×–2.5× speedups on edge devices, all while maintaining accuracy. Methods And Evaluation Criteria: The methods proposed are well-justified for the problem of efficient ViT fine-tuning and compression. The evaluation is rigorous and includes: - ImageNet classification results across DeiT and Swin Transformer models. - Comparison with prior LRA methods. - Comparison with quantization methods. - Ablation studies on object detection and initialization methods. - Latency analysis on mobile and edge devices, demonstrating real-world applicability. Theoretical Claims: There are no major theoretical claims requiring proof verification. Experimental Designs Or Analyses: The experimental design follows a conventional setting and includes: - Several reasonable baselines (LoRA for fine-tuning and QADS for post-training quantization). - Standard datasets (ImageNet and MS-COCO for downstream tasks). One minor concern is whether it is fair to compare WADS with post-training quantization baselines. Based on the description in the paper, WADS appears to resemble quantization-aware training (QAT) rather than pure post-training quantization (PTQ). Supplementary Material: The supplementary material includes additional ablation studies; for example, the authors present a comparison of quantization errors between Naive PTQ and WADS for each layer. Additionally, the authors provide further comparative results with state-of-the-art methods in the supplementary material (e.g., IGQ-ViT). Relation To Broader Scientific Literature: The paper builds on existing work in LRA and quantization and appropriately cites key references, such as: - LRA techniques in ViTs (e.g., SVD-based decomposition, previous PELA methods). - Quantization methods (e.g., SmoothQuant, RepQ-ViT, QADS). - Knowledge distillation for model compression. The integration of LRA and quantization into a unified framework is the core contribution of the paper. Essential References Not Discussed: The paper adequately discusses most essential references. Other Strengths And Weaknesses: I greatly appreciate the authors' contributions to the LoRA + PTQ framework. The presentation and organization of the paper are well-structured. The experiments effectively demonstrate the impact of each component in the framework step by step: - Comparing RB-LRA and RB-LRA + KD with LoRA. - Comparing WADS with PTQ baselines. - Demonstrating the memory and latency benefits of the framework on both Android and edge GPU platforms. One minor concern is whether it is fair to compare WADS with post-training quantization baselines. Based on the description in the paper, WADS appears more similar to quantization-aware training (QAT) rather than pure post-training quantization (PTQ). Other Comments Or Suggestions: N/A Questions For Authors: - It would be good to clarify how much extra time and GPU resources the scaling optimization requires. - How does WADS compare to alternative quantization-aware training (QAT) methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and provide the following responses. # A1) Difference from QAT >- Our proposed WADS includes an optimization process distinct from existing PTQ methods, yet it remains fundamentally different from QAT. >- As shown in the right part of Figure 1, WADS first measures the layer-wise weight quantization sensitivity of the RB-LRA-applied model (Eq. 13). > - Then, layers with sensitivity below a certain threshold are selected, and an optimization is performed to find a scaling vector $\alpha$ that mitigates activation outliers (Eq. 14). >- Notably, the distinction between WADS and conventional QAT lies in the following aspect: --- >**1. The method uses only 32 calibration images rather than the full training dataset, consistent with the standard static quantization flow adopted in PTQ methods.** >**2. WADS does not require any quantization-aware weight updates over the entire model; it only involves computing MSE-based quantization error and optimizing the scaling vector $\alpha$.** >**3. As the only optimization target is $\alpha$ the cost is substantially lower than QAT, which involves full model retraining.** --- >- We measured the WADS application time on an A100 GPU. >- Table I presents the optimization time and additional GPU memory required to optimize the scaling vector when applying WADS. >- As a result, the proposed WADS is clearly distinguished from QAT, which requires full training. We kindly refer the reader to Table D in our response to reviewer vjxZ for details on the full-training burden. >- Furthermore, the additional GPU memory overhead remains within 250 MB. >- As the WADS optimization is performed on high-capacity GPU hardware (e.g., 80GB A100), we believe that the additional 250MB memory usage does not present a significant bottleneck. >- Moreover, when deploying to edge or mobile devices, the scaling vector is fixed, eliminating the optimization burden. Therefore, this overhead does not pose a significant limitation in the context of our research objective. >- Accordingly, the comparison between WADS and PTQ methods presented in this paper is justified, and WADS is reaffirmed as a practical and effective alternative within PTQ settings. **Table I** >|Model|Method|GPU Time (s)|Additional GPU Memory (MB)| |-|:-:|:-:|:-:| |DeiT-B|WADS|160.6|192.13| |DeiT-T||121.7|27.24| |Swin-B||392.6|258.33| |Swin-T||131.0|88.73| # **A2) Comparison with QAT** >- Additionally, we analyze the trade-off between model size and accuracy of QAT methods and our proposed method. >- As shown in Table J, our method effectively improves the trade-off between model size and accuracy. >- Moreover, from the perspective of training cost–accuracy trade-off, our method requires significantly less computational burden compared to full training, further demonstrating its superiority. **Table J** > |Model| Method | Prec. | Model Size (MB) | Acc.(%) | > |-|-|-|:-:|:-:| > |DeiT-B|Quantformer [1]|INT4|43.3|79.70| > ||I&S-ViT [2]|INT4|43.3|79.97| > ||Ours|INT8|44.4|80.56| # **A3) Comparison with Other Compression Methods** >- Additionally, we compare the performance of our framework with various compression methods in Table K to demonstrate its superiority. >- We compare performance on the Swin-B model using the INT6 quantization method to ensure a fair comparison at a similar model size. In addition, since baseline performance varies across methods, we compare them in terms of accuracy drop. >- On the Swin-B model, our method demonstrates a superior trade-off between model size and accuracy drop compared to all other INT6 PTQ methods. >- Although I&S-ViT exhibits a slightly smaller accuracy drop, our method achieves a more balanced and efficient trade-off when considering both model size and accuracy. >- Moreover, unlike conventional quantization methods, the RB-LRA shows broad applicability not only to vision tasks but also to various modality tasks, as noted in the responses from reviewers BTg6 and vjxZ. Therefore, we believe it is practically advantageous. **Table K** >|Model|Method|LRA|Quant|Train|Prec.|Model Size(MB)|Baseline Acc.(%)|Acc.(%)|Acc. Drop(%)| |-|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |Swin-B|PTQ4ViT||$\checkmark$||INT6|66.1|85.27|84.18| -1.09 | ||APQ-ViT||$\checkmark$||INT6||| 84.01 | -1.26 | ||QDrop [3]||$\checkmark$|$\checkmark$|INT6|||84.33|-0.94| ||I&S-ViT||$\checkmark$|$\checkmark$|INT6|||84.94|-0.33| ||PELA|$\checkmark$||$\checkmark$| FP32|248.8|83.47|82.50|-0.97| ||AAFM|$\checkmark$||$\checkmark$|FP32|240.8||82.68|-0.79| ||Ours|$\checkmark$|$\checkmark$|$\checkmark$|INT8|60.1||82.97|-0.5| # **Reference** >[1] Quantformer: Learning extremely low-precision vision transformers. TPAMI'22 [2] I&S-vit: An inclusive & stable method for pushing the limit of post-training vits quantization. arXiv’23 [3] QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization. ICLR'22 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing detailed responses to my concerns. I maintain my positive view of this paper and will therefore keep my current rating. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful engagement with our work and your continued positive assessment. Your acknowledgment of our rebuttal and supportive stance are deeply encouraging. Thank you once again for your time and for the constructive feedback throughout the review process. Best regards
Summary: This paper introduces a novel framework that integrates reparameterizable branch-based Low-Rank Approximation (RB-LRA) with Knowledge Distillation (KD) to reduce the number of parameters and inference computational complexity. Additionally, the authors propose an LRA-aware post-training quantization method to enhance the performance of the model after quantization. The experimental results on the image classification task demonstrate that the proposed method achieves promising performance in both full-precision and 8-bit quantized models. ## update after rebuttal Thank you for the authors' rebuttal. The additional experimental results have demonstrated the promising performance of the proposed method. However, regarding points A1) and A2), my intention was to highlight that, as shown in Table 7, the authors' method applies 8-bit quantization to the model after RB-LRA compression, while the comparison methods quantize the ViT to 4 bits. There is no direct comparison or analysis of the computational complexity between 8-bit RB-LRA and 4-bit ViT in this context. If the authors' currently available hardware does not support 4-bit quantization, a theoretical analysis of computational costs could still be conducted to evaluate the potential practical efficiency of this technique. This is particularly relevant given that 4-bit quantization is increasingly feasible for deployment on edge devices in many resource-constrained scenarios. Claims And Evidence: Some claims are not well-supported in the current version: 1. Table 7 demonstrates that the proposed method, when integrated with knowledge distillation (KD) and low-rank adaptation (LRA)-aware INT8 quantization, achieves model sizes comparable to those of INT4 quantized ViTs. However, the paper lacks an analysis of the actual computational complexity and inference speed of these models. Specifically, it would be beneficial to evaluate the real-world inference speeds of the proposed method compared to the baseline methods in Table 7 under identical hardware and model size conditions. This information is crucial for assessing the practical applicability of the proposed framework, as retraining such a model incurs significant computational costs. A well-balanced trade-off analysis between training overhead and inference efficiency would strengthen the paper’s practical contributions. 2. While the authors highlight the strong performance of their method when combined with INT8 quantization—surpassing some INT4 post-training quantization (PTQ) approaches, as shown in Table 7—there is no direct comparison between their proposed quantization method and these state-of-the-art PTQ methods. For instance, applying AdaLog [2] to the proposed full-precision model and then comparing its performance under INT8 quantization in Table 2 would provide a more comprehensive understanding of the relative strengths and limitations of the proposed approach. 3. The evaluation of the proposed method is primarily focused on image classification. A broader assessment across other tasks, such as semantic segmentation and object detection, would further substantiate its effectiveness. For example, baseline LRA methods like PELA [1] have demonstrated competitive results in these tasks, and comparing the proposed framework against them in such scenarios would provide valuable insights into its generalizability. References [1] Guo et al. PELA: Learning Parameter-Efficient Models with Low-Rank Approximation . In CVPR 2024. [2] Wu et al. AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer . In ECCV 2024. Methods And Evaluation Criteria: Some of them are not very satisfactory in current version: 1. While the authors emphasize the robust performance of their method when integrated with INT8 quantization—outperforming certain INT4 post-training quantization (PTQ) techniques, as demonstrated in Table 7—there is a notable absence of direct comparisons between their proposed quantization method and these state-of-the-art PTQ approaches. 2.The evaluation of the proposed method primarily centers on image classification. Expanding the assessment to include additional tasks, such as semantic segmentation and object detection, would further validate its effectiveness. Notably, baseline LRA methods like PELA [1] have achieved competitive performance in these domains. A comparative analysis of the proposed framework against such methods in these contexts would offer valuable insights into its generalizability. [1] Guo et al. PELA: Learning Parameter-Efficient Models with Low-Rank Approximation . In CVPR 2024. Theoretical Claims: The paper does not include any proofs or theoretical claims. Experimental Designs Or Analyses: Some of them are not very conviencing. Please see my comments in "Methods And Evaluation Criteria" Supplementary Material: I reviewed all of them. Relation To Broader Scientific Literature: The paper extends prior work on low-rank approximation (LRA), knowledge distillation, and quantization in vision transformers (ViTs) by addressing key limitations in these areas. While existing LRA methods (e.g., PELA) reduce parameters through matrix decomposition, they often suffer from accuracy loss. The proposed reparameterizable branch-based LRA (RB-LRA) mitigates this by introducing weight reconstruction. Additionally, knowledge distillation, widely used in ViT compression (e.g., DeiT), is integrated to further enhance accuracy. Unlike prior LRA methods that overlook quantization, the paper introduces an LRA-aware quantization strategy to handle large outliers caused by decomposition. Empirical results on ImageNet demonstrate superior efficiency-accuracy trade-offs, making the framework highly relevant for real-world deployment. Essential References Not Discussed: NA Other Strengths And Weaknesses: The proposed method shows promising performance on image classification tasks while significantly reducing inference complexity, which is a valuable contribution to ViT compression. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback. Refreshing the page (F5) helps generate equations properly! # A1) Computational Complexity Analysis >- We analyze the computational complexity (i.e., FLOPs) and inference speed improvements. >- Furthermore, Table 3 demonstrates the actual acceleration achieved on mobile and edge devices. >- Our efficiency gains stem from: > ### 1. RB-LRA reduces computational complexity >- As $\text{FLOPs} \propto \text{parameter size}$, complexity is reduced by a factor of $\frac{r(m+n)}{mn}$, which is $\ll 1$. >- Our method effectively reduces FLOPs by decreasing the number of parameters, as detailed in Table 1. >### 2. INT8 operation compatibility enabled by quantization >- Specifically, when the applied quantization method is implemented using real-quantization (i.e., actual low-bit integer arithmetic rather than simulated quantization), it can be formulated as follows: > $$y_{[t, o]} = \sum_{t=0}^{T} \sum_{o=0}^{OC} \frac{1}{S_W[o] \cdot S_x[t]} \left( \sum_{i=0}^{IC} W_q[i, o]x_q[t, i] \right)$$ >- $S_W[o]$ and $S_x[t]$ denote the weight scale factor and the activation scale factor, respectively >- With compiler support, linear layers that utilize this feature can be converted into integer operations on edge devices, thereby accelerating inference. # A2) Comparison of Inference Speed >- We measured the inference speed of the 4-bit PTQ methods on a real edge device (NVIDIA Xavier) and the results can be found in Table F. >- In Table F, latency refers to the inference latency on edge devices, while GPU time indicates the optimization time of RB-LRA + WADS and each quantization method in a GPU environment. >- As a result, the INT4 PTQ methods did not yield meaningful inference speed-up. >- This is because edge devices, such as the Jetson Xavier series, provide hardware acceleration up to INT8 precision. >- That is, even with INT4 quantization in software, edge devices perform operations similar to INT8 in practice. >- To the best of our knowledge, INT4 is unsupported on edge devices and only partially supported on high-end GPUs (e.g., H100). >- Ultimately, the proposed framework enables practical INT8 acceleration on commercial edge devices while achieving INT4-level model compression, offering greater deployment benefits. # A3) Training Cost, Accuracy Trade-off >- Table F shows trade-offs among FT cost, accuracy, and speed. >- The proposed method incurs the highest training cost compared to existing methods. However, it demonstrates a corresponding improvement in accuracy that justifies the cost. >- Additionally, our method achieved the highest accuracy and inference speed on real-world devices. >- In edge/mobile scenarios, training is typically performed on GPU servers, while inference is conducted on resource-constrained devices. Therefore, the trade-off between accuracy, latency, and model size becomes particularly critical in such environments. >- Notably, our proposed method requires only about 6 hours, which is significantly lower compared to the full baseline training cost. These results demonstrate that our method performs effectively in real-world deployment scenarios. **Table F** >|Model|Method|Prec.|Latency (ms)|Model Size (MB)|GPU Time|Acc.(%)| |-|-|-|:-:|:-:|:-:|:-:| | DeiT-B|FQ-ViT|INT4|110.9|43.3| **77 s** |64.39| ||RepQ-ViT|INT4|96.8|43.3|247 s|75.61| ||AdaLog|INT4|113.8|43.3|2h 47m|78.03| ||Ours|INT8|**59.4**|44.4|5h 56m|**80.56**| # A4) Compatibility between RB-LRA and SOTA PTQ Methods >- First, Table 2 already presents performance comparisons of the proposed RB-LRA model combined with QADS, SmoothQuant, RepQ-ViT, and WADS. >- We evaluate INT8 quantization performance by applying methods listed in Table 7, such as AdaLog, to the RB-LRA model. >- Since the code for APQ-ViT, ADFQ-ViT, and IGQ-ViT is not publicly available, we apply AdaLog and FQ-ViT to RB-LRA instead. >- As shown in Table G, WADS shows better compatibility with RB-LRA than existing PTQ methods. **Table G** > |Model|Prec.|Method|Acc.(%)| |-|-|-|-| |DeiT-B|FP32|Baseline(RB-LRA)|81.12| ||INT8|AdaLog|79.27| |||FQ-ViT|77.70| |||WADS|**80.56**| # A5) Object Detection / Instance Segmentation Task >- We conduct experiments on object detection and instance segmentation, with results shown in Table 4. >- While a direct comparison with methods like PELA is difficult due to unavailable code and differing baselines (e.g., Swin-B vs. our Swin-T), we instead use accuracy drop as an indirect metric. >- As shown in Table H, our method shows comparable degradation, indicating that RB-LRA effectively maintains performance even with smaller models, making it competitive for mobile and edge deployment. **Table H** > |Method|Backbone|Model|Baseline$AP^{box}$|Baseline$AP^{mask}$|$AP^{box}$|$AP^{mask}$| |-|-|-|:-:|:-:|:-:|:-:| |AAFM|Swin-B|Cascade Mask R-CNN|52.0|45.0|51.9|44.7| |PELA|Swin-B|Cascade Mask R-CNN|50.1|-|49.0|-| |RB-LRA|Swin-T|Mask R-CNN|42.7|39.3|42.5|39.0|
Summary: This paper proposes RB-LRA, a low-rank approximation scheme integrated with quantization to reduce the number of parameters in vision transformers (ViTs) and mitigate inference delay. To minimize approximation errors introduced by singular value decomposition (SVD), RB-LRA employs block-level knowledge distillation (KD) to fine-tune the approximated matrices. Additionally, to integrate quantization with minimal performance degradation, RB-LRA leverages weight-aware distribution scaling (WADS) to suppress weight outliers. Experimental results on ImageNet with pretrained ViTs demonstrate that RB-LRA effectively reduces the number of parameters while maintaining high accuracy. Claims And Evidence: 1. The proposed RB-LRA significantly reduces LRA-induced errors by fine-tuning the approximated matrices using knowledge distillation. The accuracy drops on ImageNet for various ViT models (DeiT-T, DeiT-B, Swin-T, Swin-B) are -0.47%, -0.73%, -0.88%, and -0.03%, respectively. 2. The proposed RB-LRA effectively reduces quantization error through weight-aware distribution scaling (WADS), as validated on ImageNet across several ViT architectures. 3. The proposed RB-LRA significantly reduces inference latency by utilizing quantized approximated parameters. Experiments on Android and Xavier platforms demonstrate the effectiveness of RB-LRA. Methods And Evaluation Criteria: 1. The primary evaluation criteria include image classification accuracy on ImageNet and average precision (AP) on the COCO dataset for object detection. Model size is also considered, but it remains the same across all schemes using low-rank approximation. 2. The latency time criteria can validate the effectness of RB-LRA to accelerate inference. Theoretical Claims: There is no theoretical contribution in this paper. Experimental Designs Or Analyses: 1. The experiments are comprehensive across several model architectures, bitwidth and vision tasks (classification, object detection). However, there is no analyses on the additional computation for fine-tuning the approximated matrices and optimizing the scaling factor in Eq. 14. It is more fair to compare both computation overhead and the accuracy improvement. Supplementary Material: I read all the contents in the appendix but did not run the code. Relation To Broader Scientific Literature: The contribution is relatively incremental, as this paper primarily integrates widely-used techniques such as knowledge distillation and activation-aware quantization. The novelty of the work is quite limited. Essential References Not Discussed: This paper has cited most of the relevant literature. Other Strengths And Weaknesses: 1. The novelty is limited, as no new techniques are proposed in this work. 2. The proposed RB-LRA needs extra computation for fine-tuning with data, which restricts the application of this technique. Other Comments Or Suggestions: 1. Low-rank approximation can also be applied to language models. Evaluating the effectiveness of RB-LRA on LLMs would be valuable, as numerous text-based benchmark tasks are available. 2. Please indicate every notation after using it in the formula. For example, the $\tilde{V}$ and $\tilde{U}$ in Eq.5. You should explain what is the meaning of these two notations and why you set up in this way. Questions For Authors: How do you select the rank $r$? Are you selecting all the positive singular values? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and provide the following responses. # A1) Additional Computation Analysis >- RB-LRA and WADS require one-time fine-tuning (FT) and calibration only during pre-deployment, without inference overhead. >- As shown in Table D, we measured the FT time of RB-LRA and compared it with the baseline training time. The optimization time for WADS is provided in Table I in our response to reviewer MFof. >- FT time and baseline training time were measured on a single A100 GPU (batch size 64, 300 epochs for baseline) >- RB-LRA converges within 100 epochs, requiring only 15% ~ 21% of baseline training time. >- Moreover, all FT and calibration were conducted on public datasets in server-like settings, aligned with common industrial workflows >- Overall, our method adds only minor training overhead while achieving both high accuracy and clear gains in post-deployment efficiency, which is central to our contribution. >- Moreover, we analyze the trade-off between FT overhead, inference speed, and accuracy improvement. >- The results can be found in Table F in our response to reviewer xvP8. We encourage the reviewer to refer to it **Table D** > |Model|Method|GPU Time|GPU Memory (GB)| > |-|-|-|-| > |DeiT-T|Baseline|20h|5.9| > ||RB-LRA|4h 21m|7.2| > |Swin-T|Baseline|35h|16.0| > || RB-LRA|5h 19m|19.2| # A2) Clarifying Novelty: Beyond Simple Integration > Our contribution lies not in simply combining existing methods, but in the distinct novelty of each component and their cohesive integration into a unified framework. > 1. **RB-LRA + WR** >- SVD-based LRA suffers from accuracy loss due to discarded components. >- We introduce a reparameterizable residual branch that guides FT toward optimal accuracy. >- During inference, it merges into a unified form, adding no extra parameters or computations. >- This design helps preserve accuracy while reducing memory, suitable for edge deployment. > 2. **WADS** >- While activation-aware quantization methods exist, we are the first to empirically analyze the emergence of channel- and token-level outliers following LRA. >- We propose a weight-aware scaling method guided by quantization loss to effectively suppress these outliers. > 3. **Originality in Integrated Design** >- Prior works typically address LRA or quantization separately. >- We instead propose a unified, deployment-oriented framework that systematically combines novel components such as RB-LRA and WADS. # A3) Language Application >- Our method specifically targets the compression of linear layers, and we believe it can be broadly applied across various domains that utilize transformer-based models. >- In response to the reviewer’s comment, we applied RB-LRA to GPT-2 Medium on WikiText-103 using pre-trained weights from Huggingface. >- To further demonstrate generalizability, we applied RB-LRA to the Conformer [1] model for automatic speech recognition (ASR). >- For the ASR task, we used a baseline model trained from scratch on the LibriSpeech 960h dataset using our own training pipeline >- As shown in Table E, despite compressing model parameters by 25–30%, our method resulted in only marginal accuracy drops across both NLP and ASR tasks These results demonstrate the strong extensibility of RB-LRA to various transformer-based models and tasks. >- We'll include these results in the camera-ready version to demonstrate our method's generality and competitiveness. **Table E** > |Model|Dataset|Method|Params|Perplexity (↓)| > |-|-|-|:-:|:-:| > |GPT2|Wikitext-103|Baseline|354.8|18.72| > |||RB-LRA|249.4|19.51| > |Model|Dataset|Method|Params|Test-Clean WER (↓)| > |-|-|-|-|-| > |Conformer|Librispeech|Baseline|116.9|5.4| > |||RB-LRA|86.2|5.6| # A4) Writing Clarity & Notation Explanation >- The matrices $\tilde{U}$ and $\tilde{V}$ in Eq. (5) refer to trainable low-rank residual branches introduced to compensate for information loss during the standard SVD-based LRA. >- We provide a theoretical explanation of this setup in our response to Reviewer BTg6 and kindly refer the reviewer to that section for further clarification. >- If granted the opportunity to submit a camera-ready version, we will define these notations at their first occurrence and provide a brief explanation of their role and motivation in the Proposed Method section to enhance clarity. # A5) How to select the rank(r)? >- We determined the rank (r) per layer based on the target model size. >- Specifically, we retained only the top singular values after SVD to meet the desired compression ratio. >- Since singular values are defined as the square roots of eigenvalues of a positive semi-definite matrix, they are always positive and typically sorted by magnitude for truncation. >- The exact rank settings are available in our code, and we will include them in the paper for reproducibility in the camera-ready. # Reference > [1] Conformer: Convolution-augmented Transformer for Speech Recognition. Interspeech'20 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. All of my concerns have been addressed, I just raised my original score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you so much for taking the time to carefully read our rebuttal and for reconsidering your evaluation. We truly appreciate your thoughtful engagement and your updated assessment. Your constructive feedback and willingness to re-evaluate played a vital role in helping us improve and clarify our work. We're sincerely grateful for your open-mindedness and for contributing to a fair and productive review process. Thank you again for your time and effort. Best regards
Summary: The authors propose RB-LRA method which introduces a reparameterizable residual branch to compensate for information loss due to LRA. Weight reconstruction (WR) initializes the residual branch with weights discarded during decomposition, mitigating accuracy loss. To further improve accuracy, the method incorporates feature-based KD. Applying LRA creates large activation outliers, leading to increased quantization errors. So, they propose WADS which identifies layers where weight quantization errors are significant and applies per-token quantization and channel scaling to minimize accuracy loss during quantization. With these, they can achieve significant compression and latency improvements on the Swin-B and DeiT-B models with minimum loss in accuracy. Claims And Evidence: The paper claims that 1. RB-LRA effectively reduces parameters with minimal accuracy drop 2. Knowledge distillation enhances RB-LRA accuracy 3. WADS effectively reduces Quantization induced accuracy loss 4. Their method achieves significant inference speedups These claims are supported by experimental evidence The paper claims to provide a compression solution that applies to ViT models in general, but only the Swin and DeiT models are tested at tiny base sizes. Further evidence on other models such as the ViT model and larger sized models is yet to be seen. Methods And Evaluation Criteria: They test their method across various baselines on the ImageNet classification task and the Detection and Segmentation tasks on the MSCOCO dataset. Inference speeds are tested on Android and on the NVIDIA Jetson Xavier platforms to measure improvements in the real use cases. Theoretical Claims: They claim that the SVD-based initialization of the parameterizable branch reduces information loss compared to LRA. However, no theoretical analysis of this procedure is given. Experimental Designs Or Analyses: They cover a wide variety of experiments validating their claims across classification, detection, and segmentation tasks. Their ablation studies in the main paper and appendix shine light on the importance of various design choices they make throughout the paper. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper builds on prior research in low-rank approximation (LRA), knowledge distillation (KD), and quantization for Vision Transformers (ViTs) by introducing RB-LRA, a parameterizable branch that mitigates information loss from LRA, and WADS, an LRA-aware quantization method that reduces accuracy degradation from outliers. Unlike existing SVD-based LRA methods (Kumar, 2022; Zhang et al., 2023), which suffer from accuracy drops, RB-LRA reconstructs lost weight information through a residual branch. The paper also improves upon prior KD-based LRA methods (Guo et al., 2024) by introducing block-wise KD and optimizing fine-tuning at the transformer block level. Compared to SmoothQuant (Xiao et al., 2023) and RepQ-ViT (Li et al., 2023), WADS integrates per-token quantization and weight-aware scaling, significantly improving robustness for compressed models. Additionally, the paper demonstrates practical improvements in inference speed (1.9×–3.2× on mobile, 1.5×–2.5× on edge devices), making it a comprehensive and efficient compression framework for real-world deployment. Essential References Not Discussed: PTQ4ViT: Post-training quantization for vision transformers with twin uniform quantization Other Strengths And Weaknesses: Other strengths: 1. The ideas in the paper are clearly presented making it easier for the reader to understand the concepts 2. The paper draws from many methods that boost efficiency in transformers such as LRA, quantization, activation scaling etc., 3. They provide performance numbers measured on actual devices making their method ready for immediate deployment. 4. They cover a wide variety of baseline methods and ablation studies Other weaknesses 1. The paper can improve with a theoretical analysis of the design choices they make 2. Testing their algorithm on larger models such as ViT-H, ViT-L etc., 3. Does their method also work on transformers for Generative modeling such as DiT or is it only restricted to discriminative models? Other Comments Or Suggestions: See strengths and weaknesses above Questions For Authors: See strengths and weaknesses above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback. Refreshing the page (F5) helps generate equations properly! # A1) Evaluation of Large-Scale Models >- As suggested by the reviewer, we evaluate our framework on ViT-L for large-scale models. >- Experimental results show that applying the RB-LRA method improves accuracy by 0.67%. >- Furthermore, RB-LRA with WADS achieves 0.18% higher accuracy compared to the baseline while compressing the model by 86.3%, as shown in Table A. **Table A** > |Model|Model Size (MB)|Method|Acc.(%)| |-|:-:|-|-| |ViT-L|1217.2|Baseline|85.84| ||669.2|RB-LRA|86.51| ||167.3|RB-LRA + WADS|86.02| # A2) Theoretical Analysis >- We provide a brief theoretical analysis of RB-LRA and WR in response to the reviewer's comment. >- We first define the error based on the Frobenius norm as follows: $$W_r = \sum_{i=1}^{r} \sigma_i u_i v_i^{T}, \quad \lVert W - W_r \rVert_F^2 =\left\lVert \sum_{i=r+1}^{\min(m,n)} \sigma_i u_i v_i^{T} \right\rVert_F^2 = \sum_{i=r+1}^{\min(m,n)} \sigma_i^2 $$ >- The error in our RB-LRA method is expressed as follows: $$\lVert W - (W_r + E) \rVert_F^2 =\left\lVert \sum_{i=r+1}^{\min(m,n)} \sigma_i u_i v_i^{T} - E \right\rVert_F^2$$ >- Thus, the optimization objective can be formulated as: $$\min_{E} \lVert W - (W_r + E) \rVert_F^2$$ >- While $E$ can be directly computed, its shape matches the original weight, increasing the size from $\mathcal{O}(MN)$ to $\mathcal{O}(2MN)$ and defeating the purpose of low-rank approximation >- We thus design a reparameterizable error matrix using a residual branch: $$\tilde{E} = \left( V \tilde{U}^{T} + \tilde{V} U^{'T} + \tilde{V} \tilde{U}^{T} \right)$$ >- The error matrix guides fine-tuning (FT) toward optimal accuracy. >- In the setting of Eq. (5), the initial $\tilde{E}$ matrix with WR applied is expressed as follows: $$\tilde{E} = {V}U^{'T} \approx \sum_{i = r+1}^{r+l} \sigma_i u_i v_i^{T}, \quad \text{with } l < \min(m,n) - r $$ >- Finally, the resulting error incorporating this component is expressed as follows: $$W_{\text{WR}} = W_r + \tilde{E} \approx \sum_{i = 1}^{r+l} \sigma_i u_i v_i^{T}$$ >- Consequently, the information loss based on the Frobenius norm for the conventional LRA and the proposed RB-LRA with WR can be compared as follows: $$\lVert W - (W_r + \tilde{E}) \rVert_F^2 < \lVert W - W_r \rVert_F^2 = \sum_{i = r+1}^{\min(m,n)} \sigma_i^2$$ >- Ultimately, the proposed initialization method can be seen as better preserving the information in the discarded parameter dimensions. >- We further provide a theoretical analysis of how the proposed method achieves optimal performance through FT. >- Discarded singular vectors may retain task-relevant information, leading the gradient to exhibit positive projections onto them during FT. >- Based on this observation, it is reasonable to assume a positive correlation between the discarded singular vectors and the gradient. >- Accordingly, when $\tilde{V}$ is WR-initialized, it satisfies: >$$ \mathbb{E}\left[\left\langle \tilde{V}, \nabla_W \mathcal{L} \right\rangle_F\right] = \sum \sigma \cdot \mathbb{E}\left[\left\langle \nabla_W \mathcal{L}, v \right\rangle\right] > 0 $$ >- Whereas random initialization $\tilde{V} \sim \mathcal{N}(0, 1)$ yields: $$ \mathbb{E} \left[\left\langle \tilde{V}, \nabla_W \mathcal{L} \right\rangle_F\right] = \sum \mathbb{E}\left[\tilde{V}\right]\left(\nabla_W \mathcal{L}\right) = 0 $$ >- The WR method can be seen as being aligned with the gradient, suggesting that it facilitates faster convergence toward the optimal point in the loss landscape. >- The effectiveness of our method is demonstrated in Table 5. # A3) Comparision with PTQ4ViT >- We compare our method with PTQ4ViT as requested. >- As shown in Table B, at a similar compression ratio, our method yields a smaller accuracy drop on DeiT-B, indicating a better trade-off. **Table B** >|Model|Method|Prec.|Model Size (MB)|Acc.(%)| |-|-|-|:-:|-| |DeiT-B|PTQ4ViT|INT4|43.3|64.30| ||Ours|INT8|44.4|**80.56**| # A4) Applicability to Diverse Tasks >- Although we attempted to evaluate our method on DiT-XL/2, its architecture demands extensive resources and time, making it infeasible within the rebuttal period. > Specifically, one epoch takes 40 hours on a single A100 GPU. >- As an alternative, we demonstrated the extensibility of our method by applying it to GPT-2. > The results can be found in Table E in our response to reviewer vjxZ >- We also evaluate performance on a pose estimation task to demonstrate applicability across diverse tasks >- Specifically, we apply RB-LRA to the ViTPose [1] model and evaluate its performance on the COCO dataset. >- As shown in Table C, we achieve a compression ratio of 25% with less than 1% AP drop, demonstrating the versatility of our method. **Table C** >|Model|Method|Params|AP|AR| |-|-|:-:|-|-| |ViTPose|Baseline|89.9|75.9|81.0| ||RB-LRA|66.8|75.0|80.4| # Reference >- [1] Vitpose: Simple vision transformer baselines for human pose estimation. NeurlPS'22
null
null
null
null
null
null
Learning Imperfect Information Extensive-form Games with Last-iterate Convergence under Bandit Feedback
Accept (poster)
Summary: The authors propose an efficient algorithm (namely, with closed form solution update) which attains a last-iterate convergence rate of order $K^{-1/8}$ when run in self-play and when only bandit feedback is available. They also provide a lower-bound on the convergence which does not match the rate attained by the algorithm. Claims And Evidence: The result seems convincing and reasonable. Methods And Evaluation Criteria: The proposed method and evaluation criteria are standard. Theoretical Claims: I did not check the correctness of the proof, in details. Experimental Designs Or Analyses: I checked the experimental results in the Appendix. Supplementary Material: I reviewed the Appendix for the experiments and the proofs main ideas. Relation To Broader Scientific Literature: The only contribution of this works is to extend previous results in matrix games to extensive form games. Essential References Not Discussed: The related references are cited in the work. Other Strengths And Weaknesses: The main strength of the is work is that the authors show the first result on last-iterate convergence in EFGs with bandit feedback and uncoupled dynamic. Nonetheless, I have concerns regards the technical novelty of this work. Specifically, the techniques seem to me mainly adapted from (Cai et al. 2023), which established almost identical result for matrix games with bandit feedback. Indeed, the are some challenges in dealing with EFGs; nevertheless, the techniques employed to generalise (Cai et al. 2023) to EFGs are mainly adapted from (Kozuno et al. 2021) (e.g., notice that, closed form solution for OMD is not surprising in EFGs, when proper distance generating function are employed). Overall, I believe that the contribution of this paper is not enough to meet the acceptance bar of a conference such as ICML. Other Comments Or Suggestions: No additional comments. Questions For Authors: Which are the fundamental differences of this work with respect to (Cai et al. 2023)? I am willing to increase my score if the answers are pretty positive. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. Our response to each question is provided below. **Q1. Specifically, the techniques seem to me ... when proper distance generating function are employed).** Thank you for this comment. Indeed, our analysis scheme is inspired by [1], as we have mentioned in Remark 5.5 of our paper. However, we would also like to remark that a key distinction between our work and [1] is that [1] uses the *vanilla negentropy regularizer* while we use a *virtual transition-weighted negentropy regularizer*. Simply using the vanilla negentropy regularizer in our problem can still guarantee a finite-time last-iterate convergence result as mentioned in Remark 5.4 of our paper, but doing so will not be able to obtain computationally efficient approximate policy updates. In contrast, with the leverage of the virtual transition-weighted negentropy regularizer, our algorithm simultaneously permits efficient approximate policy updates (please see the end of Section 4.1 and Appendix A in our paper for details) and guarantees a meaningful last-iterate convergence. Besides, we fully agree that closed-form updates for OMD are common in IIEFGs. And it is precisely for this reason that we would like to make our algorithm computationally efficient as well. Nevertheless, we would like to note that, though dilated negentropy is common in existing literature studying IIEFGs (say, [2,3,4]), we did not use the dilated negentropy as the regularizer in our algorithm. Though it is well known that dilated negentropy admits efficient closed-form policy updates, using dilated negentropy in our problem can only lead to a vacuous convergence rate that is exponentially large in $X$, as illustrated in Section 4.1 of our paper. In contrast, as aforementioned, we leverage the negentropy weighted by the virtual transition over the infoset-action spaces. Importantly, this virtual transition is specifically designed to maximize the minimum sequence-form "visitation probability" of all the infoset-action pairs, so as to ensure a last-iterate convergence with meaningful dependence on $X$. Our virtual transition is directly computed by our Algorithm 2, and the construction of such a virtual transition is never exploited in existing literature for studying IIEFGs, let alone in [1]. Therefore, we believe our algorithmic design as well as the result is meaningful and valuable to the literature. We hope the above explanations would be helpful to address the concerns of the reviewer about our technical contributions, and we will explicitly incorporate the above discussions into the revision of our paper for better clarity. **Q2. Which are the fundamental differences of this work with respect to (Cai et al. 2023)?** Please see our response to the question above. [1] Cai et al. Uncoupled and convergent learning in two-player zero-sum markov games with bandit feedback. NeurIPS, 2023. [2] Hoda et al. Smoothing techniques for computing nash equilibria of sequential games. MOR, 2010. [3] Kroer et al. Faster First-Order Methods for Extensive-Form Game Solving. EC, 2015. [4] Kozuno et al. Learning in two-player zero-sum partially observable markov games with perfect recall. NeurIPS, 2021 --- Rebuttal Comment 1.1: Comment: I would like to thank the Authors for the detailed response. Nonetheless, after reading the rebuttal and the answer to Question 2 (Q2) of Reviewer T2uG, I am still not convinced that the paper has sufficient technical contribution. Thus, for now, I will keep my current score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the prompt response.
Summary: The paper proposes an algorithm for two-player zero-sum extensive-form games (2p0g) with bandit feedback. Under the self-play setting, the last-iterate of a profile computed by the proposed algorithm converges to the Nash equilibrium in a rate of $k^{-1/8}$ (or $k^{-1/6}$ in expectation). The main innovation is the use of a negentropy regularization instead of dilated entropy regularization frequently used in other algorithms for 2p0g, combined with a novel technique called virtual transition. ## update after rebuttal I thank the authors for the rebuttal responses. I am convinced by the authors' explanation and raised my score to "Accept". Claims And Evidence: The following is a list of claims. 1. The first algorithm for 2p0g under the bandit feedback setting with finite-time last-iterate convergence guarantee. 2. Their algorithm still admits a closed form solution for policy updates over the full policy space. 3. Their algorithm does not require any communication or coordination between the two players and is model-free 4. A sample complexity lower bound for the last-iterate convergence rate. Regarding 1, I would like to mention that there exists an algorithm for 2p0g under the __noisy-feedback setting__ with finite-time last-iterate convergence guarantee. Indeed, [Abe et al. (2024)](https://proceedings.mlr.press/v235/abe24a.html) provides one. I recommend the authors to discuss it, especially why deriving a sample complexity bound like the one in the paper is difficult to derive. Regarding 2 and 3, they are supported by Appendix A and Algorithm 1, respectively. Regarding 4, the stated lower bound is correct and coincides with one obtained previously in Fiegel et al. (2023). Methods And Evaluation Criteria: Experiments are conducted in standard games, as shown in Figure 1, which compares NE gap of different algorithms after different episodes. Theoretical Claims: No, but all theoretical results seem to be reasonable. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: The paper contributes to scientific fields related to sequential decision making since the paper deepens the understanding of solving 2p0g with algorithms whose last iterate converges. It is currently an active research area. Essential References Not Discussed: Maybe [Abe et al. (2024)](https://proceedings.mlr.press/v235/abe24a.html). Other Strengths And Weaknesses: As I noted above, a similar result is obtained in [Abe et al. (2024)](https://proceedings.mlr.press/v235/abe24a.html), that is, an algorithm with finite-time last-iterate convergence guarantee under the noisy feedback setting. However, they do not specifically consider 2p0g and issues related to loss estimates. This paper picks up 2p0g and handles bias and variance trade-off resulting in a result closer to a practical setting. Furthermore, the paper improves the existing convergence rate result of [Abe et al. (2024)](https://proceedings.mlr.press/v235/abe24a.html) from $\mathcal{O}(k^{-1/10})$ to $\mathcal{O}(k^{-1/8})$ with a completely different idea. Regarding the lower bound, it seems it has been already shown in Fiegel et al. (2023). The reason why I recommend weak acceptance is that the derived bound is not close to a lower bound, and it is not really clear why virtual transition is necessary. Other Comments Or Suggestions: As noted above, Abe et al. (2024) provides a similar result, and it is nice to cite and discuss it. In addition, it would be nice to have some illuminating example and discussion on why virtual transition is necessary. Questions For Authors: Q1. Would you explain how the lower bound in this paper differs from the one in Fiegel et al. (2023)? Since the latter one does not assume algorithm to be used, its lower bound applies to the setting considered in this paper too. Also, while their theorems states only sample complexity, they actually derive rate lower bounds in their appendix and use them to derive sample complexity lower bounds. Q2. This paper states that "extracting (tree structure) from one traversal on the game tree" is easy, quoting Bai et al. (2022). However, I really did not understand what exactly this means (even when I read Bai et al. (2022) before). What does one traversal mean? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. Our response to each question is provided below. **Q1. Additional References.** Thank you for referring to this! We compare our work with some notable works studying achieving last-iterate convergence in games with noisy feedback [1,2,3]. Specifically, [1,2,3] establish algorithms for solving two-player zero-sum matrix games or multi-player monotone games with noisy gradient feedback, where the noisy feedback for all the actions is observable. In contrast, we study learning IIEFGs with bandit feedback. That is, only the rewards of the experienced infoset-action pairs are revealed to the players. For the infoset-action pairs that are not experienced in one episode, no information regarding them is revealed, not even the noisy rewards. Hence, their algorithms are not directly applicable to our problem, and our results might not be directly comparable to theirs. Actually, in Sec. 8 of [2], the authors say that bandit feedback is a scheme with more limited feedback, and in Sec. 8 of [3], it is also mentioned that extending their results from the noisy feedback setting to the bandit feedback is a challenging future direction. We will explicitly incorporate the above discussions in the revision of our paper for completeness. **Q2. "it would be nice to ... why virtual transition is necessary."** Thanks for this comment! The primary effect of using a virtual transition-weighted negentropy regularizer is to ensure the algorithm can be approximately updated in a computationally efficient manner. Indeed, using a vanilla negentropy regularizer without the virtual transition can also lead to a finite-time last-iterate convergence for our problem, but in this way, the policy cannot be efficiently updated, as illustrated in Remark 5.4. Particularly, in matrix games, the policy space is simply the probability simplex over all the actions, and it is well known that using OMD/FTRL with vanilla negentropy regularizer admits a closed-form multiplicative update (see, e.g., Chap. 28 of [4]). However, in general IIEFGs, this is no longer the case since each sequence-form policy is not a probability measure over infoset-action space. Fortunately, the inner products between sequence-form virtual transitions and sequence-form policies are still probability measures over infoset-action space. Therefore, using a virtual transition-weighted negentropy regularizer permits efficient policy updates (please see Prop. A.1 in Appendix A). On the other hand, the downside of using a virtual transition-weighted negentropy regularizer is that it will enlarge the stability term (approximately) by a factor of $\frac{1}{p_{1: h}^x(x_h)}$ (please see Lem. D.1 in Appendix D). Therefore, it necessitates choosing a good virtual transition with $p_{1: h}^x(x_h)$ well lower-bounded, which is guaranteed by our Algorithm 2 (please see Lem. B.1 for details). We will incorporate the above explanations in our revised paper for clarity. **Q3. Lower Bound.** The main difference is that our lower bound only applies to the class of algorithms with last-iterate convergence. Technically, as shown in Appendix H, the proof idea of our lower bound is that any algorithm with $\Theta(k^{-\alpha})$ ($\alpha\in(0,1)$) last-iterate convergence can also guarantee a $\Theta(K^{1-\alpha})$ regret. Thus, we can prove the lower bound for the last-iterate convergence by contradiction using the existing regret minimization lower bound of $\Omega(\sqrt{XAK})$ in learning IIEFGs with bandit feedback (say, Thm. 6 in [5]; Thm. 3.1 in [6]). Our reduction is specifically designed for the class of algorithms with last-iterate convergence and does not directly work for the class of algorithms only with average-iterate convergence. On the other hand, we note that the hard instance used in our lower bound of last-iterate convergence is the same as that in [6], as the hard instances in the proofs of lower bounds of regret minimization and sample complexity are the same [5,6]. We will further clarify this in the revision of our paper. **Q4. "What does one traversal mean?"** Due to the perfect recall condition, all the infoset-action pairs $\mathcal{X}\times\mathcal{A}$ across different steps $h$ form a tree. This tree can be constructed by performing the search (e.g., DFS, BFS) over all the infoset-action pairs one time. We will include the above explanations in the revision of our paper. [1] Abe et al. Last-iterate convergence with full and noisy feedback in two-player zero-sum games. AISTATS, 23. [2] Abe et al. Adaptively Perturbed Mirror Descent for Learning in Games. ICML, 24. [3] Abe et al. Boosting Perturbed Gradient Ascent for Last-Iterate Convergence in Games. ICLR, 25. [4] Lattimore et al. Bandit Algorithms. 20. [5] Bai et al. Near-optimal learning of extensive-form games with imperfect information. ICML, 22. [6] Fiegel et al. Adapting to game trees in zero-sum imperfect information game. ICML, 23. --- Rebuttal Comment 1.1: Comment: First of all, I would like to thank the authors for the rebuttal comments. > The main difference is that our lower bound only applies to the class of algorithms with last-iterate convergence. Actually, their lower bounds are information-theoretic lower bound. In other words, their lower bound apply to any algorithms, regardless of last-iterate or average-iterate convergent algorithms. Since the lower bound in this paper coincides with theirs, the provided lower bound does not provide any new insight. > This tree can be constructed by performing the search I see. This is what I exactly imagined, but is it really easy? For example, the search in no-limit Texas hold’em would require a lot of computation, and doing one traversal seems infeasible (Table 4 of https://poker.cs.ualberta.ca/count_nl_infosets.html) Please feel free to let me know if I miss anything. --- Reply to Comment 1.1.1: Comment: We thank the reviewer's insightful feedback and prompt response! We provide our further responses below. **Q1. Lower Bound.** We concur that their lower bound coincides with ours. We will revise the parts of the presentation regarding the lower bound in our paper accordingly. **Q2. Search on the Game Tree.** We thank the reviewer for raising this crucial implementation aspect. Indeed, as the computation complexity of performing searching on the game tree scales as $O(XA)$, on game instances with extremely large infoset-action spaces, performing search on the game trees of such game instances is nearly infeasible in practice. Meanwhile, as the regret minimization lower bound and the sample complexity for learning in IIEFGs with bandit feedback scale with $\Omega(\sqrt{XAK})$ and $\Omega((X A+Y B) / \varepsilon^2)$, the statistical efficiency guarantees for learning on game instances with extremely large infoset-action spaces also seem vacuous, if no function approximation assumptions are further imposed. We will explicitly include this example in our paper and revise our original statement on the LHS of Line 346-349 accordingly.
Summary: The paper studies two-player zero-sum POMGs, proposes an negentropy-regularization-based algorithm, and establishes the last-iterate convergence. Though the rate seems quite loose, it compares favorably to the rate in a very relevant work Cai et al. [2023] with bandit feedback and in terms of last-iterate guarantees. A worst-case lower bound is also established. ## update after rebuttal As I have communicated to the authors, I do not think I missed any important contributions/weaknesses in my original assessment and will therefore keep my score unchanged. I support the acceptance of the paper. Claims And Evidence: The claims are credible and well supported by the mathematical results. Methods And Evaluation Criteria: Yes. The convergence is established on the NE gap, a standard measure of optimality. Theoretical Claims: I only checked the proof of the lower bound, but not the main theorem on the convergence rate of the proposed algorithm. However, I find the arguments reasonable, given the existing literature on the use of negentropy regularizer and Cai et al. [2023]. Experimental Designs Or Analyses: The experiments do not look meaningful. I do not think any conclusion can be drawn. The curves are very close to each other and well within the confidence interval/one standard deviation. Minor: The paper mentions simulation results in the abstract but does not include them in the main paper. Supplementary Material: I read the proof of Theorem 5.6 and the experimental results. Relation To Broader Scientific Literature: N/A. Essential References Not Discussed: I do not find any essential reference missing. Some papers that are worth noting due to their connection to the work in some aspects include: 1) Zeng et al. [2022] and Abe et al. [2024] seem to use regularization in a similar spirit. They add regularization to obtain last-iterate convergence, and decay the weight to ensure that eventually the original problem is solved. 2) Chen et al. [2023] studies two-player zero-sum games, establishes last-iterate convergence, and allows for uncoupled learning. 3) Abe et al. [2023], already referenced in the paper, needs more discussion, especially as Abe et al. [2023] also considers non-full-information feedback. Abe, K., Sakamoto, M., Ariu, K. and Iwasaki, A., 2024. Boosting Perturbed Gradient Ascent for Last-Iterate Convergence in Games. arXiv preprint arXiv:2410.02388. Chen, Z., Zhang, K., Mazumdar, E., Ozdaglar, A. and Wierman, A., 2023. A finite-sample analysis of payoff-based independent learning in zero-sum stochastic games. Advances in Neural Information Processing Systems, 36, pp.75826-75883. Zeng, S., Doan, T. and Romberg, J., 2022. Regularized gradient descent ascent for two-player zero-sum Markov games. Advances in Neural Information Processing Systems, 35, pp.34546-34558. Other Strengths And Weaknesses: Overall, I believe this work makes sufficient contributions to the literature to warrant acceptance. It is in general well-written (modulo the discussion on virtual transition, which I do not follow) and the technical results seem sound. On the negative side, the sub-optimality of the convergence rate is a concern, and the simulations do not show a clear advantage of the algorithm over the existing ones. Other Comments Or Suggestions: 1) The virtual transition is an important tool used in the paper. While the authors made an effort to explain this, the discussion is not clear enough for the audience to understand. The authors should consider explaining the virtual transition in a more intuitive manner, using fewer math expressions. 2) I do not agree with the discussion on the advantage of average-iterate convergence on line 055 and below. I understand that last-iterate convergence is usually preferred in the community, but I do not find the reason compelling. Averaged parameters can be easily updated online and requires minimal extra storage. You do not have to maintain the history of all past iterates. I also do not follow the comment that averaging policies is infeasible with non-linear function approximation. What prevents you from averaging the weights of a neural network across iterations? In addition, most existing convergence results are established in the tabular case anyways. Questions For Authors: No more questions. See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. Our response to each question is provided below. **Q1. "The paper mentions ... not include them in the main paper."** Thanks for pointing this out. We will be sure to explicitly include more descriptions of the experiments in the main paper body. **Q2. Additional References.** Thanks again for noting these additional references! We compare our work with these works as follows: * [1] establishes algorithms for solving two-player zero-sum MGs with $\widetilde{{O}}(k^{-1 / 3})$ last-iterate convergence. However, they study fully observable MGs while we study partially observable MGs. Further, they require full-information gradient feedback while we only require the bandit feedback. Note that computing the full-information gradient feedback even requires the knowledge of the state transitions (please see Sec. 2.2 of [1]). * For the comparisons with [2,3,4], please see our response to Q1 of reviewer T2uG. * Our work and [5] both study MGs with bandit feedback. However, we remark that they study fully observable MGs while we study partially observable MGs. Further, they require the assumption that the Markov chain can be irreducible and aperiodic for some policy. Therefore, we believe their algorithm is not directly applicable to our problem and our results are not directly comparable to theirs. Besides, our algorithm is OMD-based and can still guarantee a sublinear regret of $\widetilde{{O}}(k^{7 / 8})$ when the opponent is an adversary, while the policy in [5] is updated via constructing approximations of the best response to the opponent's policies and it is not clear whether their algorithm can guarantee a sublinear regret in the presence of a potentially adversarial opponent. We will include all the above discussions in our revised paper for better clarity. **Q3. Experiment Results.** We fully agree that on Lewis Signaling, the performance of our algorithm overlaps with that of the baseline algorithms. However, we believe the experiment results show that our algorithm performs relatively well across all game instances, and though there might be some baseline that performs similarly to our algorithm on some game instances, this baseline algorithm might not be able to converge fast on other game instances, as the last-iterate convergences of all the baseline algorithms are not theoretically guaranteed. Specifically, as mentioned in Appendix I, on the remaining game instances that are harder to learn (note that $X=3$, $A=3$ on Lewis Signaling, while $X=6$, $A=2$ on Kuhn Poker, $X=468$, $A=3$ on Leduc Poker and $X=12288$, $A=13$ on Liars Dice), only baseline BalancedFTRL converges as fast as our algorithm on Leduc Poker. However, our algorithm converges faster than BalancedFTRL on Liars Dice, and there is clearly a large gap between our algorithm and it on Kuhn Poker during episode $10^5$ to $10^7$. The other baseline with notable performance is BalancedOMD. Nevertheless, we also would like to note that, though BalancedOMD tends to have a similar NE gap at the very end on Kuhn Poker and Leduc Poker, our algorithm converges relatively faster than BalancedOMD up to episode $10^5$ on Kuhn Poker and $10^7$ on Leduc Poker. More importantly, the performance of BalancedOMD is the worst among all the baselines on the hardest game instance Liars Dice, and our algorithm converges clearly faster than BalancedOMD on this instance. **Q4. "The virtual transition is an important tool used in the paper ..., using fewer math expressions."** Please see our response to Q2 of reviewer T2uG. **Q5. "I do not agree with the discussion on ..."** We agree that if linear function approximation is leveraged, the average policy can be obtained by applying a moving average over the parameters online. Our original statement on the LHS of Line 55-57 might not be appropriate, as we now realize, and we will revise it accordingly. When using nonlinear function approximation, averaging the parameters of nonlinear function approximation is of course possible, but our original statement on the LHS of Line 61-63 intended to indicate that simply averaging the parameters of nonlinear function approximation might not be able to obtain the average policy. To see this, let $f(x)=x^2$, $x_1=1$ and $x_2=5$. Then $f(\frac{x_1+x_2}{2})=9\ne \frac{f(x_1)+f(x_2)}{2}=13$. We will also revise this part for better clarity. [1] Zeng et al. Regularized Gradient Descent Ascent for Two-Player Zero-Sum Markov Games. NeurIPS, 20. [2] Abe et al. Last-iterate convergence with full and noisy feedback in two-player zero-sum games. AISTATS, 23. [3] Abe et al. Adaptively Perturbed Mirror Descent for Learning in Games. ICML, 24. [4] Abe et al. Boosting Perturbed Gradient Ascent for Last-Iterate Convergence in Games. ICLR, 25. [5] Chen et al. A finite-sample analysis of payoff-based independent learning in zero-sum stochastic games. NeurIPS, 23. [6] Lattimore et al. Bandit Algorithms. 20. --- Rebuttal Comment 1.1: Comment: I thank the reviewers for the response. I do not think I missed any important contributions/weaknesses in my original assessment and will therefore keep my score unchanged. I support the acceptance of the paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the prompt response and the support of acceptance of our work!
null
null
null
null
null
null
null
null
UncertainSAM: Fast and Efficient Uncertainty Quantification of the Segment Anything Model
Accept (poster)
Summary: This work introduces a method for uncertainty quantification of SAM, based on Bayesian entropy formulation. A lightweight post-hoc UQ method is trained based on the formulation. Results on multiple public benchmarks demonstrate the effectiveness of the proposed method. ## update after rebuttal I am glad to keep my original positive rating. Claims And Evidence: Claims are clear and convincing. Methods And Evaluation Criteria: Evaluation criteria make sense. Theoretical Claims: Theoretical claims seem correct to me. Experimental Designs Or Analyses: The experimental designs and analyses are reasonable to me. Supplementary Material: I have checked the quantitative and qualitative results in the supplementary materials. Relation To Broader Scientific Literature: This uncertainty quantification method of SAM can benefit a wide range of computer vision tasks. Considering the wide application of SAM, this method will be influential. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: The experiments on multiple public benchmarks validate the effectiveness of the proposed method comprehensively. Other Comments Or Suggestions: N/A. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and positive assessment. To further support the reviewer’s inclination toward acceptance, we would like to highlight the linked code repository in the main paper, which demonstrates the ease of use of our proposed framework. We remain open to any additional questions, remarks, or suggestions to refine our proposed paper during the author-reviewer discussion.
Summary: This paper introduces an interesting method to measure the uncertainty of SAM in image segmentation tasks. To achieve this, this paper proposes USAM, an efficient post-hoc method that can quantify the uncertainty of SAM and help users determine whether the model results are reliable, which tunes a lightweight MLP-based estimator. Specifically, the authors design a formula based on Bayesian entropy that can simultaneously consider three levels of uncertainty: model uncertainty (for example, the model is too small, resulting in inaccurate predictions), prompt uncertainty like ambiguous user prompts, and task uncertainty like unclear user prompts. The paper demonstrates the effectiveness of USAM on several datasets, showing its ability to identify sources of uncertainty and improve model predictions. Overall, this paper is well-organized and theoretically grounded. Claims And Evidence: The main claims about uncertainty quantification in SAM are supported by clear mathematical formulation, method, proofs, and experiments. The minor concern is the claim that "the ambiguous nature of the class-agnostic foundation model SAM challenges current uncertainty quantification approaches." is not well justified. As SAM can generate ambiguous predictions (Top-K), it inherently has uncertainty awareness, making the uncertainty analysis easier. Why do the authors claim that this will challenge UQ? Methods And Evaluation Criteria: The proposed method introduces Bayesian Entropy Approximation in SAM, which makes sense and is technically sound. It considers uncertainty from different levels (Eq.2), which is well justified by both theoretical proofs (Sec. 3) and experiments (Sec.4.2 to Sec. 4.4). Theoretical Claims: I have checked the theoretical claims about the uncertainty quantification formulation (Sec. 2.2), the grounding in SAM (Sec.2.3), and Bayesian Entropy Approximation (Sec.3 and Appendix A). I did not find major theoretical errors and think that this paper is well organized. Experimental Designs Or Analyses: Most of the experiments were well conducted, especially for the detailed justification of claimed uncertainty terms from the model, prompt, and task aspects. The experimental analysis is also comprehensive and convincing. However, there are two major concerns. The first concern is insufficiently comparing existing UQ methods in SAM model, such as [1-3]. The second concern is the lack of comparison of specific settings focusing on uncertain predictions, like ambiguous segmentation, which is critical to highlight the strength of the proposed UQ methods [1,2]. [1] Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM (NeurIPS 24) [2] Uncertainty-aware Fine-tuning of Segmentation Foundation Models (NeurIPS 24) Supplementary Material: I have checked the mathematical proof of Bayesian entropy approximation, and more quantitative and qualitative results. Relation To Broader Scientific Literature: The proposed method has a potentially broader impact on understanding the existing SAM model from the uncertainty perspective, thereby inspiring the algorithmic designs of the next-generation SAM. Essential References Not Discussed: The proposed method explores the uncertainty quantification of the SAM model. However, this is not a new direction, and there are many related works. Even though the proposed method is comprehensive, technically sound, and theoretically correct, it is suggested to make necessary discussion with related UQ methods using SAM [1-2] [1] Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM (NeurIPS 24) [2] Uncertainty-aware Fine-tuning of Segmentation Foundation Models (NeurIPS 24) Other Strengths And Weaknesses: Strength: - The paper is well organized and gives a comprehensive and theoretically sound analysis of the uncertainty in SAM. - The proposed Bayesian approximation is reasonable. - The perception of the model, prompt, and task uncertainty is interesting and comprehensive. - The experimental justification is solid. Weakness: - Lack some comparison with related works about modelling uncertainty in SAM - Lack some clarification (see Questions For Authors) Other Comments Or Suggestions: N/ Questions For Authors: - The authors claim that "For several tasks, quantifying the uncertainty of SAM is of particular interest.". It is suggested to clarify what tasks require measuring uncertainty in SAM. - The authors are suggested to give a justification for the efficiency of the proposed method. - In Sec. 4, the authors implement the method in SAM2, but Fig.1 illustrates SAM, which is inconsistent. - Considering that the authors use SAM 2 as the base model, the theoretical analysis lacks a formulation for time-series signals. Will it introduce extra uncertainty when considering the object motions in videos? - Lack of a comparison on the ambiguous segmentation benchmark. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We acknowledge the in-depth review that recognizes our strengths and provides valuable directions for addressing flaws, particularly regarding related methods. Below, we address questions and concerns. Issues already covered in other responses are only referenced. --- ## Claims Thank you for pointing out that this claim in the abstract is not sufficiently elaborated in the introduction. A more detailed discussion of SAM and its implications for UQ is provided in Section 2.3, "UQ in SAM". Most UQ methods are based on MC strategies and motivated by Bayesian principles (see Equation 1 ff.). While these methods work well in practice, their theoretical justifications rely on assumptions that are often unrealistic and cannot be guaranteed. For example, test-time augmentation (Wang et al., 2019), used to estimate aleatoric uncertainty, assumes that the sample is in-distribution. Similarly, epistemic UQ estimators, such as model ensembles, assume that uncertain samples are out-of-distribution. These issues, which already exist in well-established UQ approaches, are further exacerbated by the task-agnostic nature of SAM. To the best of our knowledge, there is no general discussion in the literature on SAM's implications on UQ. However, we acknowledge that our formulation of this claim is unclear. To address this, we suggest adding the following to line 126 (right column): "While the presented methods work well in practice, the theoretical foundations for separating aleatoric and epistemic uncertainty remain debatable and under active discussion (Gruber et al., 2023). The task-agnostic nature of SAM further amplifies this challenge, as discussed next. More..." ## Weakness 1 As the reviewer mentioned, several papers propose UQ methods, such as [1] and [2]. The first encodes variance into the latent to sample ambiguous segmentations, while [2] estimates pixel-wise epistemic uncertainty to improve finetuning. They are designed for specific tasks and not directly comparable to us. USAM neither samples mask nor estimates pixel-wise uncertainty, making it unsuitable for both. However, our UQ comparison already aligns with the comparisons performed in [1] and [2]. In [1], UQ performance is evaluated by applying prompt perturbations (similar to our H_XP) to SegGPT and image augmentations (similar to our H_Y) to SEEM. Note that by changing the frameworks to SegGPT and SEEM, the ambiguous segmentation task is evaluated but UQ is not evaluated in isolation. In [2], a pixel-wise equivalent to our "Entropy" is used, termed SUM Confidence-Uncer, alongside a pixel-wise equivalent to our H_A, termed SUM Discrepancy-Uncer. These study designs underline that standard UQ methods are suitable for evaluating new UQ approaches. Nevertheless, we acknowledge that our paper does not sufficiently address the variety of UQ approaches used in conjunction with SAM. We propose to expand Section 2.3 ("UQ in SAM") and include a discussion of more related works that propose or employ UQ with SAM, such as [1] and [2]. --- ## Q1 "For several tasks, quantifying the uncertainty of SAM is of particular interest". We mention some tasks in the proposed paper: For example, Vujasinovic et al. (2024) use UQ to estimate if supervision is required to correct tracking errors, or in the healthcare domain to gain trust and improve prediction (Deng et al., 2023; Zhang et al., 2023). The papers [1] and [2] mentioned by the reviewer employ UQ in SAM to sample prediction or to improve finetuning. We will add these additional references to the camera ready. --- ## Q2 A similar concern was raised by reviewer **nnaG**. We agree that the simplicity of our MLP does not sufficiently support claims of superior efficiency. To address this, we conducted runtime experiments demonstrating that USAM incurs minimal computational overhead and is significantly faster than MC-based methods. For details, see our response to reviewer **nnaG**, Weakness 3, or run the experiment in our anonymous repository https://anonymous.4open.science/r/UncertainSAM-C5BF/scripts/speed_test.ipynb. We propose adding these findings to the camera-ready version. --- ## Q3 and Q4 Our submitted paper contains ambiguities regarding our use of SAM vs. SAM2. For a detailed clarification, please see our introduction of the response to reviewer **6x8E**. --- ## Q5 We are uncertain how our method could be applied to the ambiguous segmentation benchmark. As stated by [X], the goal of this benchmark is to achieve "better agreement between prediction and the ground truth". In contrast, our method estimates the variance of the distribution (i.e., uncertainty). Since we do not modify SAM’s mask prediction, USAM’s segmentations would be identical to vanilla SAM’s. [X] Rahman, Aimon, et al. "Ambiguous medical image segmentation using diffusion models." CVPR. 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal, which addresses my concerns. Hence, I am happy to keep my score.
Summary: This paper makes a series of efforts to enable uncertainty quantification for SAM. To achieve this, the authors first adopt Monte Carlo sampling to estimate the predictive, epistemic model, aleatoric prompt, and aleatoric task uncertainty. Then, to release the computation burden of the sampling process during testing, the authors concatenate SAMs 256-dimensional mask and IoU tokens and feed them into MLPs that are trained to predict uncertainty. Claims And Evidence: The claims are well-supported by the experiments. Methods And Evaluation Criteria: The proposed method is reasonable. Theoretical Claims: There are no new theoretical proofs. Experimental Designs Or Analyses: The experimental designs are reasonable with no major issues. One minor issue is the lack of details on how to combine SAM and SAM2 for the evaluation. Supplementary Material: I review visualization results for samples with uncertainty estimations. Relation To Broader Scientific Literature: This work extends the existing Uncertainty Quantification into SAMs. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths 1. Uncertainty quantification is important for segmentation tasks. 2. The decomposition of aleatoric uncertainty into prompt and task uncertainty is reasonable. 3. The proposed method is easy to follow. Weaknesses 1. Lacks the analysis of why the mask and IoU tokens can reflect the uncertainty. Since the uncertainty depends on the ground truth mask, the mask and IoU tokens can not reflect uncertainty entirely. 2. Why choose MLPs and how to confirm their specific parameters e.g., 3 Layers with 512 hidden states and a Sigmoid activation. Other Comments Or Suggestions: No other comments. Questions For Authors: Since the uncertainty depends on the ground truth mask, the mask and IoU tokens can not reflect uncertainty entirely. Thus, it remains questionable that the usage of only the mask and IoU tokens can reflect the uncertainty. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank the reviewer for highlighting our strenghts and asking questions that are valuable for eliminating weaknesses and enhancing our paper in terms of readability and evaluation. --- Before addressing the questions, we need to clarify an issue that affects this review as well as reviewer **3EoJ**. After carefully re-reading the paper with temporal distance, we acknowledge that our description of the usage of SAM2 is misleading and needs to be improved: - First, line 254 (right column), "We combine the evaluation of SAM and SAM2," is misleading because we do not combine the models during evaluation. Rather, we combine the datasets evaluated in the corresponding papers. We do this to provide a broader overview that also includes the latest data publication from SAM2. - Second, line 242 (right column), "We use the pretrained SAM2," suggests that we use the video segmenation mechanism of SAM2, which is designed for segmenting temporally consistent masks in videos. In reality, while we use the pretrained SAM2 model, we only use its single-image implementation, which is equivalent to SAM and does not include the video extension. We do this because the authors of SAM2 provide more stable code, better interfaces, and slightly better accuracy. Potentially, our MLPs could be used ad hoc to estimate uncertainty from the refined tokens of SAM2 applied to videos, but that is not evaluated and is beyond the scope of this paper. To prevent misunderstandings for future readers, we will avoid the term SAM2 and suggest the following modifications in the camera-ready version: **Replace:** "In our experiments, we use the pretrained SAM2 (Ravi et al., 2024)" **with** "In our experiments, we use SAM models pretrained by Ravi et al. (2024)." **Replace:** "We combine the evaluation of SAM and SAM2 ..." **with** "Combining the dataset selection of Kirillov et al. and Ravi et al., we ..." --- ## Weakness and Question 1: Why the mask and IoU tokens can reflect uncertainty (Un)certainty refers to the likelihood that the prediction is correct. In our case, correctness is measured by the IoU metric, making the expected IoU an ideal proxy for uncertainty. In general, estimating uncertainty does not require ground truth. For example, predictive uncertainty is often described as the variance of the predictive distribution. As seen in Equations 1 and 2, this distribution does not depend on the ground truth. Therefore, the absence of ground truth during inference does not invalidate the expressiveness of our method that uses tokens for uncertainty estimation. Additionally, the SamScore in SAM (Kirillov et al.) is based on the same principle and also does not have access to ground truth during inference. However, the question of whether and how the mask and IoU tokens reflect uncertainty is an interesting one: Essentially, the MLPs attempt to map patterns from the object’s latent space (i.e., the tokens) to the expected IoU gap (i.e., uncertainty) based on statistical evidence. The already given effectiveness of the SamScore suggests that such patterns exist in general. To further investigate and provide more details for future readers, we conducted an additional ablation study. In this ablation, we set either the IoU token or the mask token to 0 to remove potentially informative patterns. We evaluated using the tiny model across the three tasks described in the submitted paper on the COCO dataset. The results indicate that both tokens contribute to the predictive capability, yet they remain accurate as standalone features. Both in combination lead to the best results. The results are: **Token to Uncertainty Ablation** **in relative AUC in %** | **Mask Token** | **IoU Token** | **Model UQ** | **Prompt UQ** | **Task UQ** | |:---:|:---:|:---:|:---:|:---:| | No | Yes | 61.19 | 72.08 | 86.89 | | Yes | No | 62.63 | 76.42 | 91.96 | | Yes | Yes | **63.66** | **78.30** | **94.82** | We will add this interesting findings to the camera ready version. ## Weakness 2: The MLPs' design choice The reviewer’s remark about our MLP design choice is similar to the comment from reviewer **nnaG**, indicating that our current explanation lacks clarity. There was no dedicated architecture design process for the UQ estimator. Instead, the architecture is motivated by SAM’s original MLP design for predicting the SamScore. We differ only in the input dimension and hidden states (increasing from 256 to 512) because we use and concatenate both tokens instead of using just one. The sigmoid activation function is a reasonable choice since the expected IoU gap ranges between 0 and 1. To improve clarity, we suggest modifying the method description and adding a figure to illustrate the MLP architecture. Please see our response to reviewer **nnaG** for details on the suggested changes.
Summary: The paper introduces UncertainSAM (USAM), a method for uncertainty quantification (UQ) in the Segment Anything Model (SAM). By decomposing uncertainty into epistemic (model), aleatoric (prompt/task), and task ambiguity components, USAM employs a Bayesian entropy framework and lightweight MLPs to estimate uncertainty efficiently. Experimental results across multiple datasets demonstrate USAM's superiority over SAM's inherent confidence score and standard UQ baselines, offering a practical solution for applications requiring reliability and efficiency. Claims And Evidence: good Methods And Evaluation Criteria: good Theoretical Claims: good Experimental Designs Or Analyses: good Supplementary Material: good Relation To Broader Scientific Literature: Please see weakness Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. It introduces a novel decomposition of uncertainty into epistemic, prompt, and task sources, addressing SAM's unique class-agnostic nature. This provides a nuanced framework for understanding SAM's limitations. WDSI effectively addresses the challenge of progressive occlusion, where parts of objects become invisible due to overlapping or changes in camera viewpoint. By leveraging hypergraph representations and the WSW metric, the framework ensures reliable tracking and segmentation even in occlusion-heavy scenes. 2. USAM’s lightweight MLPs achieve state-of-the-art performance while avoiding the computational overhead of Bayesian methods. Extensive experiments on five datasets (DAVIS, ADE20k, MOSE, COCO, SA-V) validate USAM’s effectiveness in tasks like model selection, prompt refinement, and task supervision. Weaknesses 1. While USAM outperforms SAM’s confidence score and entropy-based methods, it does not compare against recent SAM-specific UQ approaches (e.g., prompt-augmented methods), leaving its relative novelty unclear. And, though USAM is claimed to be efficient, no runtime or memory benchmarks are provided to quantify gains over Bayesian sampling or other UQ methods. This information would help researchers understand the practical feasibility of implementing USAM. 2. The MLP architecture and SMAC3 optimization setup are underdescribed. The author should provide more structural diagrams to clarify. 3. As a lightweight method, the authors should add parameter and speed comparison. Other Comments Or Suggestions: none Questions For Authors: please see weakness Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks to the reviewer for the detailed feedback and comprehensible remarks that will help enhance the paper. Our responses to the proposed improvements are as follows: --- ## Weakness 1 We only partially agree with the statement that we do not compare to SAM-specific UQ. For example, our Prompt UQ is closely related to Deng et al. (2023, see main paper for reference). Furthermore, SAM-specific UQ is usually application-specific and not readily translatable to our holistic view of UQ. However, this weakness will be addressed with a modification of the paper, which is further elaborated in our response to reviewer 3EoJ, who identified a similar concern. Please see response to reviewer 3EoJ for the details. --- ## Weakness 2 We agree that the **architecture** is underdescribed. It is motivated by SAM's original MLP design for predicting the SamScore. We differ only in the input dimension and hidden state (from 256 to 512) because we employ and concatenate both tokens. We suggest the following modifications to clarify this in the paper: - Line 237: "The simple design of the MLPs aligns with the architecture of SAM’s inherent MLP for predicting the SamScore, as described in the Appendix of Kirillov et al. (2023). They have three layers and a sigmoid activation, but we use 512 input dimensions and hidden states due to the concatenated input tokens. The architecture is visualized in Figure XX." - In addition, we will include a figure presenting an MLP with the input tokens. Regarding **SMAC3:** Due to space limitations, we presented the upper and lower hyperparameter bounds optimized using SMAC3 in ll. 247–252 (right column). We suggest adding a more detailed description and presenting the per-iteration results in tabular form. Since the optimization is not the primary focus of the paper, we will include this in the Appendix and refer to it in line 252. The SMAC3 results will provide greater transparency regarding the robustness of the proposed method. --- ## Weakness 3 Our proposed MLPs are very small compared to the entire SAM model and require only a single forward pass, unlike MC-based methods. However, as mentioned by the reviewer, an explicit performance comparison should be included in the final version of the paper to demonstrate the impact of our method on efficiency. To address this, we designed hardware-dependent runtime experiments and added them to our anonymous code repository at https://anonymous.4open.science/r/UncertainSAM-C5BF/scripts/speed_test.ipynb for reproducibility. To illustrate the practical efficiency impact, we conducted tests on a small consumer GPU (GeForce RTX 3050) using setups described in the paper. We compare us to vanilla SAM, entropy, and MC sampling (image and prompt) as employed in our paper. Notably, we performed the test with all nine (!) MLPs presented in our paper. Thus, the efficiency of our method could be increased by up to a factor of nine if only specific MLPs are necessary for a given application. It shows that our method is superior to all other extended UQ methods. We thank the reviewer for this suggestion and propose adding the following results to the camera ready: **Large Model** | Config | Time (s/iter) | Params | |--------------------------|--------------|----------------| | SAM2 | 0.43796 | 224.4M | | SAM2 + Entropy | 0.45278 | 224.4M | | SAM2 + 5 MC img aug | 2.18681 | 224.4M | | SAM2 + 8 MC prompt aug | 0.50030 | 224.4M | | SAM2 + 9 MLPs **(USAM)** | **0.44144** | 224.4M + 9×526K | **Base+ Model** | Config | Time (s/iter) | Params | |--------------------------|--------------|----------------| | SAM2 | 0.20538 | 80.8M | | SAM2 + Entropy | 0.23294 | 80.8M | | SAM2 + 5 MC img aug | 1.02753 | 80.8M | | SAM2 + 8 MC prompt aug | 0.28902 | 80.8M | | SAM2 + 9 MLPs **(USAM)** | **0.20966** | 80.8M + 9×526K | **Small Model** | Config | Time (s/iter) | Params | |--------------------------|--------------|----------------| | SAM2 | 0.13444 | 46.0M | | SAM2 + Entropy | 0.15716 | 46.0M | | SAM2 + 5 MC img aug | 0.68734 | 46.0M | | SAM2 + 8 MC prompt aug | 0.23210 | 46.0M | | SAM2 + 9 MLPs **(USAM)** | **0.14160** | 46.0M + 9×526K | **Tiny Model** | Config | Time (s/iter) | Params | |--------------------------|--------------|----------------| | SAM2 | 0.12166 | 38.9M | | SAM2 + Entropy | 0.14876 | 38.9M | | SAM2 + 5 MC img aug | 0.58362 | 38.9M | | SAM2 + 8 MC prompt aug | 0.19736 | 38.9M | | SAM2 + 9 MLPs **(USAM)** | **0.13899** | 38.9M + 9×526K |
null
null
null
null
null
null
Smoothed Preference Optimization via ReNoise Inversion for Aligning Diffusion Models with Varied Human Preferences
Accept (poster)
Summary: This paper proposes SmPO-Diffusion, a novel method for aligning text-to-image diffusion models with varied human preferences. The authors introduce two core contributions: (1) a smoothed preference modeling approach, replacing the binary preference distribution with a smooth distribution derived by reward models. (2) an optimization strategy called ReNoise Inversion, designed to estimate the dpo preference distribution by inversion and renoise. Experimentally, the method achieves state-of-the-art results, surpassing existing baselines across multiple human preference evaluation metrics, and notably reduces training resource consumption. Claims And Evidence: Yes. The claims in the submission are supported by convincing evidence. The paper provides extensive experiments and ablation studies to support the claims. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I've checked the correctness of any proofs for theoretical claims. The author provided the derivation of eq.9 in the appendix and it is correct. Experimental Designs Or Analyses: Yes. 1. Primarily used the Pick-a-Pic v2 dataset and benchmarked against established baselines (e.g., Diffusion-DPO, MaPO, KTO, SFT), which is appropriate and valid. 2. Evaluated results using widely-accepted automatic metrics (e.g., PickScore, HPSv2.1), effectively capturing text-to-image quality. 3. Conducted detailed ablation studies clearly demonstrating the contributions of each method component; analyses were very convincing. 4. Included practical and valid computational efficiency analyses. Supplementary Material: Yes. I've reviewed the supplementary material. Mainly the additional qualitative results as well as the user study. The results continue to be convincing. Relation To Broader Scientific Literature: Yes. 1. The paper builds upon Direct Preference Optimization (DPO) techniques, extending recent methods such as Diffusion-DPO and MaPO, and addresses known issues like excessive optimization and misalignment. 2. The work utilizes DDIM and ReNoise inversion techniques from prior literature but uniquely applies them to preference optimization, demonstrating improved alignment and computational efficiency compared to earlier methods. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The proposed smoothed preference distribution appears novel and well-motivated. Incorporating DDIM inversion into DPO training is innovative, providing deeper insights into preference modeling and demonstrating notable originality. 2. The experimental results are convincing and impressive, clearly highlighting the significant potential of the proposed method. 3. The paper is clearly written and well-structured. The presented method achieves a good balance between simplicity and effectiveness. Weaknesses: 1. While using automated reward models is practically advantageous, including explicit human validation results in the main text would considerably strengthen the claim that the proposed method enhances real-world alignment. Relying solely on reward models to capture fine-grained preference distributions appears overly simplistic, raising questions about why this method can outperform large-scale human preference datasets. 2. A key contribution of this paper is adapting DDIM inversion for sampling and noise injection within DPO training. However, additional theoretical justification is required. Specifically, it remains unclear why DDIM inversion is more effective than simple noise addition during training. Moreover, discussing whether this approach generalizes beyond DPO to other training paradigms would enhance the paper's impact. Other Comments Or Suggestions: Please provide a clearer introduction and context for the compared methods, such as MaPO and Diffusion-KTO. Currently, these comparisons appear abruptly without prior explanation, making it confusing for readers. Clarifying why these particular methods were selected for comparison and briefly outlining their relevance to your work would significantly enhance readability and coherence. Questions For Authors: 1. The idea of using DDIM inversion rather than simply adding noise during training is intriguing. Could this technique be generalized and applied effectively to other diffusion model training paradigms beyond DPO? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for highly recognizing the value of our study and helpful feedback! --- **Q1:** *Why do reward models outperform large human preference datasets despite their simplicity?* **A1:** This is a great question! We believe reward models have the following advantages: 1、We note that PickScore (reward model) feedback is interpretable as **pseudo-labeling** the Pick-a-Pic dataset—a form of data cleaning [1] [2]. Specifically, the PickScore reward model serves not only as a proxy for human preferences but also as a mechanism for **data refinement**. 2、**Granular Feedback vs. Binary Human Labels** Human preference datasets inherently provide *binary* labels (e.g., "A > B"), limiting their ability to represent nuanced preference distributions. In contrast, reward models assign **continuous scores**, enabling two key advantages: + **Smooth Supervision**: Loss functions can adaptively weight samples based on reward differences,avoiding the brittleness of binary thresholds. + **Relative Calibration**: Fine-grained rewards better capture perceptual similarity (e.g., distinguishing "slightly better" from "significantly better" cases), which is critical for guiding iterative model updates. This explains why reward-guided training can outperform raw human datasets: it amplifies signal-to-noise ratios in human preferences while retaining their semantic intent. We emphasize that our approach complements (not replaces) human feedback, as reward models are trained on human-annotated data. [1] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet. CVPR 2020 [2] Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, and Quoc Le. Rethinking pretraining and self-training. NeurIPS 2020 --- **Q2:** *The paper uses DDIM inversion in DPO training but lacks theoretical justification of its advantage over basic noise addition.* **A2:** Thank you for the valuable feedback! We first derive the minimal proof. Note that for Equ (5), Diffusion-DPO utilizes $q(x_{1:T}|x_{0})$ to approximate $p_{\theta}(x_{1:T}|x_{0})$. For each step, Diffusion-DPO used $q(x_{t-1,t}|x_{0})$ to approximate $p_{\theta}(x_{t-1,t}|x_{0})$. Suppose we replace standard noise injection at $x_{t-1}$ with **a single-step DDIM inversion**, when given $x_{0}$. Formally, we propose to use $q(x_{t-1}|x_{0})p_{\theta}(x_{t}|x_{t-1})$ for approximation. And this approximation yields lower error because $$D_{KL}(q(x_{t-1}|x_{0})p_{\theta}(x_{t}|x_{t-1})||p_{\theta}(x_{t-1,t}|x_{0})) = D_{KL}(q(x_{t-1}|x_{0})||p_{\theta}(x_{t-1}|x_{0})) <D_{KL}(q(x_{t-1,t}|x_{0})||p_{\theta}(x_{t-1,t}|x_{0})).$$ The complete derivation employing full inversion will be detailed in the paper. --- **Q3:** *Could DDIM inversion be generalized and applied as an standard alternative to noise injection in diffusion training?* **A3:** Yes, generalizable! We greatly appreciate the opportunity to discuss this point! Currently, the training of large-scale diffusion models/flow matching primarily consists of two stages: **pre-training** and **post-training**. The pre-training stage focuses on establishing the trajectory from a noise distribution to a data distribution, while we argue that the post-training stage should specifically fine-tune variables that are **highly correlated with the target images**. This idea shares some similarities with the **Reflow** method in **Rectified Flow** [3], which trains specialized (noise, image) pairs. However, a key distinction is that most real-world datasets contain only images (without paired noise). Thus, our motivation is to **adaptively adjust image-dependent variables** during post-training. We believe this approach could become a dominant paradigm in future post-training strategies, as it **preserves the model’s inherited capabilities while enhancing its task-specific performance**. [3] Liu, Xingchao, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. ICLR2023 --- **Q4:** *Please provide a clearer introduction and context for the compared baselines.* **A4:** Thank you for the suggestion! We fully agree that providing clearer motivation for method selection is critical for readability. Below, we outline our planned revisions to address this concern: These baselines share a common focus on **human-aligned image generation** but **differ fundamentally in their technical mechanisms**: Diffusion-KTO adopts the Kahneman-Tversky [4] model to represent human utility instead of maximizing the log-likelihood of preferences. Margin-aware Preference Optimization (MaPO) jointly maximizes the likelihood margin between the preferred and dispreferred datasets and the likelihood of the preferred sets, simultaneously learning general stylistic features and preference without reference model. [4] Tversky, A. and Kahneman, D. Advances in prospect theory: Cumulative representation of uncertainty.
Summary: This paper proposes a smoothed extension to DPO, where the preference data is smoothed to incorporate non-binary preference labels. The authors first created smoothed preference labels for image pairs using the likelihood estimation of a reward model. Then it uses noise-inversion to provide a better posterior estimation of the forward process during the optimization step. The author conducted experiments on a variety of models (SDXL, SD1.5) and showed that the proposed method outperforms existing baselines. Claims And Evidence: The author made two central claim: a) the existing framework based on binary feedbacks do not adequately address noise and inconsistencies in the human preference data, necessitates the incorporation of a smoothed relaxation. b) the proposed noise inversion technique addresses the improves training process of preference alignment algorithm by providing better estimates of forward sampling trajectory. The author was able to support both of these claims with theoretical analysis and emprical experiments. Methods And Evaluation Criteria: The author employed common evaluation metrics (e.g. PickScore, HPSv2) and datasets (e.g. Parti-Prompts) that are widely used by related literature. The author also included relevant baselines such as Diffusion-DPO, Diffusion-KTO. The authors additionally conducted experiments on multiple models (SD1.5, and SDXL) to showcase the generalizability of the proposed method. These results are comprehensive. Theoretical Claims: I checked derivation in the main paper and appendix, they look good to me. Experimental Designs Or Analyses: See Methods And Evaluation Criteria section. The author employs a set of common setup for their experiments. Supplementary Material: I reviewed all the appendix. Relation To Broader Scientific Literature: The proposed work proposed a novel solution to a problem that is well-recognized by the community (i.e. noises and inconsistencies of working with human preference data). This work provide meaningful insight to the problem and can inspire future solutions. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. The authors are suggested to discuss the statistic significance of the gap in table 7, 8 and table 9. These numerical metrics can be hard to interpret for people with limited exposure to prior literature, as they have drastically different scales and some gap may be perceived as "insignificant". Other Comments Or Suggestions: Table 7 caption can be improved, as it is not immediately clear if rows are reward model and columns are evaluation metric, or vice versa. It was only clearly after noticing the uparrow in column titles. I suggest the authors clearly state in the caption that rows represents different training signals while columns are evaluated metrics. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We're truly grateful for your enthusiastic reception of our manuscript and your insightful feedback! --- **Q1:** *Table 7 caption can be improved.* **A1:** We sincerely appreciate your constructive feedback! We have implemented the following improvements: + **Caption Revision**: We have revised the table caption to explicitly state: **"Table 7: Impact of weight-to-sensitivity ratio using different reward models. We report the performance comparison where rows represent different training signals and columns denote evaluation metrics. Upward arrows (↑) in column headers indicate higher values are preferable."** + **Visual Reinforcement**: To further enhance clarity: We will include a brief structural description in the Results section (Section 5.4) when first referencing the table in "Choice of Reward Model" subsection. --- **Q2**:*Discuss the statistic significance of the gap in table 7, 8, 9.* **A2:** Thank you for this helpful suggestion! To provide a comprehensive statistical evaluation of our approach, we present both **win-rate analysis** and **effect size measurements**. + The win-rate results in *Supp Table 1-2* demonstrate how frequently evaluators prefer SmPO generations over baseline methods, with values exceeding **50**% (indicating statistical majority preference) highlighted in bold. Supp Table 1: Win-rate comparison between SmPO-SDXL and baselines on Pick-a-Pic v2 test set. | | PickScore | HPSv2.1 | ImageReward | Aesthetic | CLIP | | ------------- | --------- | -------- | ----------- | --------- | -------- | | vs. SDXL | **89.6** | **93.6** | **81.5** | **65.5** | **60.6** | | vs. SFT-SDXL | **96.6** | **94.2** | **81.3** | **76.3** | **67.3** | | vs. DPO-SDXL | **75.7** | **85.9** | **71.9** | **63.9** | **55.4** | | vs. MaPO-SDXL | **83.5** | **86.5** | **61.0** | 47.4 | **60.2** | Supp Table 2: Win-rate comparison between SmPO-SD1.5 and baselines on Pick-a-Pic v2 test set. | | PickScore | HPSv2.1 | ImageReward | Aesthetic | CLIP | | ------------- | --------- | -------- | ----------- | --------- | -------- | | vs. SD1.5 | **82.5** | **88.0** | **80.1** | **79.3** | **66.5** | | vs. SFT-SD1.5 | **68.5** | **66.1** | **62.2** | **59.0** | **60.0** | | vs. DPO-SD1.5 | **72.3** | **83.7** | **76.1** | **69.7** | **63.3** | | vs. KTO-SD1.5 | **68.3** | **67.3** | **57.0** | **64.7** | **59.8** | + Additionally, we compute **Cohen's d** to quantify the **effect sizes** between SmPO and baselines in *Supp Table 3-6*, following conventional interpretations: |d|<0.2 (small), 0.2≤|d|<0.5 (medium), 0.5≤|d|<0.8 (large), and |d|≥0.8 (very large). **Cohen's d measures the standardized mean difference in standard deviation units, making it unit-free.** Supp Table 3: Cohen's d for comparison between SmPO-SDXL and baselines on Pick-a-Pic v2 test set. | | PickScore | HPSv2.1 | ImageReward | Aesthetic | CLIP | | ------------- | --------- | ------- | ----------- | --------- | ----- | | vs. SDXL | 0.600 | 0.860 | 0.537 | 0.205 | 0.203 | | vs. SFT-SDXL | 0.937 | 0.960 | 0.693 | 0.480 | 0.266 | | vs. DPO-SDXL | 0.282 | 0.505 | 0.290 | 0.192 | 0.010 | | vs. MaPO-SDXL | 0.508 | 0.648 | 0.176 | -0.120 | 0.213 | Supp Table 4: Cohen's d for comparison between SmPO-SD1.5 and baselines on Pick-a-Pic v2 test set. | | PickScore | HPSv2.1 | ImageReward | Aesthetic | CLIP | | ------------- | --------- | ------- | ----------- | --------- | ----- | | vs. SD1.5 | 0.650 | 1.010 | 0.730 | 0.652 | 0.385 | | vs. SFT-SD1.5 | 0.227 | 0.188 | 0.157 | 0.136 | 0.139 | | vs. DPO-SD1.5 | 0.368 | 0.732 | 0.529 | 0.392 | 0.275 | | vs. KTO-SD1.5 | 0.272 | 0.192 | 0.099 | 0.209 | 0.161 | Supp Table 5: Cohen's d for comparison between SmPO-SDXL and baselines on HPD v2 test set. | | PickScore | HPSv2.1 | ImageReward | Aesthetic | CLIP | | ------------- | --------- | ------- | ----------- | --------- | ----- | | vs. SDXL | 0.601 | 0.914 | 0.498 | 0.206 | 0.124 | | vs. SFT-SDXL | 1.084 | 0.957 | 0.647 | 0.419 | 0.272 | | vs. DPO-SDXL | 0.336 | 0.527 | 0.192 | 0.219 | 0.012 | | vs. MaPO-SDXL | 0.537 | 0.772 | 0.162 | 0.021 | 0.149 | Supp Table 6: Cohen's d for comparison between SmPO-SD1.5 and baselines on HPD v2 test set. | | PickScore | HPSv2.1 | ImageReward | Aesthetic | CLIP | | ------------- | --------- | ------- | ----------- | --------- | ----- | | vs. SD1.5 | 1.007 | 1.360 | 0.946 | 0.778 | 0.473 | | vs. SFT-SD1.5 | 0.362 | 0.181 | 0.165 | 0.201 | 0.185 | | vs. DPO-SD1.5 | 0.641 | 0.979 | 0.686 | 0.481 | 0.327 | | vs. KTO-SD1.5 | 0.413 | 0.246 | 0.173 | 0.244 | 0.246 | --- Rebuttal Comment 1.1: Comment: Thanks for the response. I keep my recommendation for acceptance
Summary: The paper introduces SmPO-Diffusion, an approach for aligning text-to-image diffusion models with AI preferences by refining the Direct Preference Optimization framework. Instead of using a binary preference, the authors propose a smoothed preference distribution based on a reward model. Claims And Evidence: 1. Smoothed Preference Modeling. Smoothed Labeling has been used in DPO in 2023. See *Essential References Not Discussed*. 2. Optimization via Renoise Inversion. There is no optimality analysis. Methods And Evaluation Criteria: For evaluation, Image Reward, and Aesthetic score could also be considered as benchmarks. Theoretical Claims: There is no formal proof in this paper. Experimental Designs Or Analyses: The ablation study on hyperparameters are not comprehensive. Supplementary Material: Yes, Appendix B - G. Relation To Broader Scientific Literature: I cannot understand what the contribution is in this paper. It seems that this paper changes the hyperparameter $\beta$ to $(2\alpha -\gamma)\beta$, and tunes the hypermeters on specific datasets. That is all. Essential References Not Discussed: 1. Lack of comparison and discussion with many Diffusion DPO variants. 2. Label smoothing has been proposed by the DPO authors in 2023 [1]. [1] A note on DPO with noisy preferences & relationship to IPO. https://ericmitchell.ai/cdpo.pdf Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. I cannot understand what the contribution is but adding more hyperparameters in the loss function. I would more than appreciate it if the authors can clarify it. 2. It seems that $\gamma$ can be absorb in $\beta$ by $({2\alpha} - \gamma)\beta = (\frac{2\alpha}{\gamma} - 1)\gamma\beta$ and $\frac{\alpha}{\gamma}$ is calculated by equation (12). Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your feedback and we'll do our utmost to resolve your concerns. --- **Q1:** *For evaluation, Image Reward, and Aesthetic score could also be considered.* **A1:** We have incorporated both metrics: **Image Reward** and **Aesthetic Score** are reported in **Table 2,8 and 9** (Quantitive comparison), **Table 3-7** (Ablation studies) and discussed in Line 262-263 of **Section 5.1**. --- **Q2:** *There is no formal proof in this paper.* **A2:** The detailed derivation of SmPO loss is provided in **Appendix B**. --- **Q3:** *The ablation study on hyperparameters are not comprehensive.* **A3:** Our **ablation studies** on hyperparameters were systematically designed along five key dimensions to validate our method’s robustness: + **Component Effectiveness: Table 3** shows each module's contribution through progressive ablation. + **DDIM Inversion Steps: Table 4** shows step selection's impact on output quality and training efficiency. + **$(\gamma,\beta)$ combinations : Table 5** shows hyperparameter interaction effects. + **CFG in Inversion: Table 6** shows CFG's influence during DDIM inversion. + **Weight-to-Sensitivity Ratio $\frac{\alpha}{\gamma}$: Table 7** shows reward-model derived calculations. --- **Q4:** *Lack of comparison and discussion with many Diffusion DPO variants.* **A4:** Thank you for your feedback. We've compared our method with key baselines: **SFT, Diffusion-DPO,** its variants (**Diffusion-KTO and MaPO**). Additional **SPO** comparison are included (see response **A3** to Reviewer UNNR). To our knowledge, these are the most relevant and publicly available baselines. We welcome suggestions for additional baselines with references. --- **Q5:** *Label smoothing has been proposed by the DPO authors in 2023, Conservative DPO (cDPO).* **A5:** Thank you for your feedback. In fact, cDPO **differs** fundamentally from our proposed SmPO. 1. cDPO focuses on noisy labels (where labels may be **flipped** with some probability) and applies a **linear weighting** of swapped DPO losses: $L_{cDPO}(x_0^{w},x_0^{l})=(1-\epsilon)L_{DPO}(x_0^{w},x_0^{l})+\epsilon L_{DPO}(x_0^{l},x_0^{w})$. Our SmPO assigns **fine-grained** labels to each image pair and incorporates them into the DPO loss function through distribution averaging, yielding Equ (9). 2. In cDPO, $\epsilon$ is a is a **fixed** manually-tuned hyperparameter (refer to Line 50 and 84 of [1]), whereas our SmPO introduces reward-adaptive smoothing - zeroing loss for similar pairs, amplifying gradients for dissimilar ones. In other words, distinct image pairs are assigned different weights. As such, in Equ (12), $\frac{\alpha}{\gamma}$ could be more precisely represented as $\frac{\alpha}{\gamma}(x_{0}^{w},x_{0}^{l})$. [1] https://github.com/eric-mitchell/direct-preference-optimization/blob/main/trainers.py --- **Q6:** *I cannot understand what the contribution is.* **A6:** We appreciate your feedback and are happy to clarify our key contributions: 1. **Smoothed Preference:** We propose a novel *smoothed preference distribution* replacing DPO's binary modeling. Unlike fixed-weight approaches, our *(2α-γ)* scaling factor acts as a **pair-dependent dynamic regulator**, automatically driving loss to zero for similar pairs while amplifying gradients for dissimilar ones with each image pair's reward signal. 2. **Precision Optimization:** Unlike Diffusion-DPO's random noise injection, our *Renoise Inversion* technique enables **direct optimization of image-correlated variables.** This contribution provides more stable training and higher efficiency. --- **Q7:** *It seems that $\gamma$ can be absorb in $\beta$ by $(2\alpha-\gamma)\beta=(\frac{2\alpha}{\gamma}-1)\gamma\beta$ is calculated by equation (12).* **A7:** We appreciate this observation. The term $(2\alpha-\gamma)$ is *image-pair-dependent*, automatically adjusted based on reward differences. The core idea of $(2\alpha-\gamma)$ is to ensure that *"the loss decreases when preferences are more similar, and increases otherwise."* **This can be implemented via different parameterization strategies.** For example, $\gamma$ can be step- or epoch-aware, adjusting dynamically to avoid being absorbed into $\beta$. We adopt a simple formulation: $(2\alpha-\gamma)$ is implemented as $(\frac{2\alpha}{\gamma}-1)\gamma$, balancing adaptability and numerical stability. Ablations (Table 5) validate the impact of different $(\gamma,\beta)$ pair configurations. --- **Q8:** *Optimization via Renoise Inversion. There is no optimality analysis.* **A8:** Thank you for your feedback. Optimizing Equ (10) requires sampling $x_{1:T}\sim p_{\theta}^{c}(x_{1:T}|x_{0})$. While Diffusion-DPO approximates this with $q(x_{1:T}|x_{0})$, we argue precise $x_{0}$ reconstruction needs more accurate latents. This motivates our use of diffusion reconstruction methods (DDIM Inversion), supported by additional theoretical analysis (refer to response **A2** to Reviewer 5qKt).
Summary: This paper proposes a post-training method for diffusion models, named SmPO, which is modified from diffusion-dpo. SmPO recognize the variability of human preferences by replacing binary preferences with smoothed preference distributions, thereby mitigating label bias. In addition, Renoise Inversion method is employed to estimate the sampling trajectory. Compared to previous methods that randomly sample noise from a Gaussian distribution, this inversion method provides a more accurate estimation of the trajectory preference distribution. Extensive experiments demonstrate the strong performance of SmPO and the effectiveness of the proposed modules. Claims And Evidence: Claim: Human preference is variable. Simply considering binary preference of a pair of image causes label bias. Evidence: Exp in Table 3 Claim: inversion based method can provide better estimation of the trajectory preference distribution. Evidence: Exp in Table 3 Methods And Evaluation Criteria: yes Theoretical Claims: I have reviewed the derivation of the loss function in equation 16, and it appears correct to me. However, I will also consider the opinions of other reviewers. Experimental Designs Or Analyses: 1. Computational cost comparison in Table 1: The GPU hours for SmPO-SDXL should include the training time of the reward model, as other methods rely only on preference pair data, whereas SmPO-SDXL requires a reward model. 2. There are no experiments validating the design of the weight-to-sensitivity ratio. 3. In the qualitative results, the image-text alignment appears to be improved. Could you provide quantitative results on the GenEval [1] benchmark to verify this? [1] Ghosh, Dhruba, Hannaneh Hajishirzi, and Ludwig Schmidt. "Geneval: An object-focused framework for evaluating text-to-image alignment." Advances in Neural Information Processing Systems 36 (2023): 52132-52152. Supplementary Material: I have reviewed the user study, derivation, and additional results parts. Relation To Broader Scientific Literature: 1. The standard practice of previous dpo-based methods [1,2,3] is to use binary label. This paper proposes a new approach that instead employs smoothed preference distributions and demonstrates the effectiveness of this replacement through experiments 2. While using q(x_{1:T}|x_0) to approximate the reverse process is proposed by [1] and is widely accepted by the community, this paper proposes using inversion to achieve a more accurate estimation. [1] Wallace, Bram, et al. "Diffusion model alignment using direct preference optimization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Yang, Kai, et al. "Using human feedback to fine-tune diffusion models without any reward model." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3] Yang, Shentao, Tianqi Chen, and Mingyuan Zhou. "A dense reward view on aligning text-to-image diffusion with preference." arXiv preprint arXiv:2402.08265 (2024). Essential References Not Discussed: I believe the following papers should be mentioned in the related works section or even compared against. [1] Deng, Fei, et al. "Prdp: Proximal reward difference prediction for large-scale reward finetuning of diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Liang, Zhanhao, et al. "Step-aware preference optimization: Aligning preference with denoising performance at each step." arXiv preprint arXiv:2406.04314 2.5 (2024): 7. [3] Karthik, Shyamgopal, et al. "Scalable ranked preference optimization for text-to-image generation." arXiv preprint arXiv:2410.18013 (2024). Other Strengths And Weaknesses: The rationale for designing the weight-to-sensitivity ratio as given in Equation 12 is unclear. Other Comments Or Suggestions: I will modify the score according to the opinions of other reviewers and the author's response. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are honored by your favorable evaluation and have carefully considered your suggestions! --- **Q1**: *The rationale for designing the weight-to-sensitivity ratio as given in Equ (12) is unclear.* **A1**: Thank you for your feedback! According to Equ (8), $\tilde{p}(x_{0}^{w}|c) = \frac{p(x_{0}^{w}|c)^{\alpha}p(x_{0}^{l}|c)^{\gamma-\alpha}}{Z_{p}^{w}(c)}=\frac{(p(x_{0}^{w}|c)^{\frac{\alpha}{\gamma}}p(x_{0}^{l}|c)^{1-\frac{\alpha}{\gamma}})^{\gamma}}{Z_{p}^{w}(c)}$ where weight-to-sensitivity ratio could be regarded as the pairwise preference probability $p(x_{0}^{w} \succ x_{0}^{l}|c)$. Since a pairwise preference is hard to model directly, we adopt the well-established Bradley-Terry framework through the reward model as Equ (12). --- **Q2:** *The GPU hours for SmPO should include the training time of the reward model.* **A2**: Thank you for the feedback! Training PickScore requires **8×A800 GPUs for ~40 minutes** [1]. We have updated **Table 1** to include this cost. Supp Table 1: Computational cost comparison | Model | GPU Hours | | --------- | ---------------------- | | DPO-SDXL | ~976.0 | | MaPO-SDXL | ~834.4 | | SmPO-SDXL | ~**150.8 (145.5+5.3)** | | Model | GPU Hours | | ---------- | -------------------- | | DPO-SD1.5 | ~204.8 | | KTO-SD1.5 | ~1056.0 | | SmPO-SD1.5 | ~**41.3 (36.0+5.3)** | [1] https://github.com/yuvalkirstain/PickScore --- **Q3:** *PRDP, SPO and RankDPO should be included.* **A3:** We appreciate the suggestion! We'll update our related work with these papers and include the SmPO-SPO comparison. Supp Table 2: Median score comparison of SD1.5 on HPD v2. | | PickScore | HPSv2.1 | ImageReward | Aesthetic | CLIP | | ---------- | --------- | --------- | ----------- | --------- | --------- | | SPO-SD1.5 | 21.49 | 26.74 | 0.181 | 5.655 | 32.513 | | SmPO-SD1.5 | **23.62** | **32.53** | **1.331** | **6.264** | **38.88** | Supp Table 3: Win-rate comparison on Parti-Prompts. | | PickScore | HPSv2.1 | ImReward | Aesthetic | CLIP | | ----------------------- | --------- | ------- | -------- | --------- | ----- | | SmPO-SD1.5 vs SPO-SD1.5 | 70.81 | 75.47 | 76.62 | 65.03 | 79.96 | | SmPO-SDXL vs SPO-SDXL | 58.37 | 52.44 | 63.72 | 52.36 | 74.25 | **However, Comparisons with PRDP (different SD-v1.4 base) and RankDPO (different training data) are limited by the unavailability of checkpoints.** --- **Q4:** *There are no experiments validating the design of the weight-to-sensitivity ratio.* **A4:** Thank you for the feedback. We have designed two experiments to verify this aspect: 1. **Ablation Study (Table 3):** The results demonstrate that incorporating our PickScore-based weight-to-sensitivity ratio (*+Smoothed*) leads to **significant performance gains**, validating its effectiveness. 2. **Reward Model Analysis (Table 7):** We have further compared the weight-to-sensitivity ratio with different reward models, providing additional empirical support for our design choice. --- **Q5:** *Could you provide quantitative results on the GenEval benchmark?* **A5:** Yes! Thank you for highlighting this important aspect. We have conducted quantitative results on the GenEval. Supp Table 4: GenEval scores over SD1.5 baselines. | Model | single object | two object | counting | Attribute binding | position | colors | overall | | ----- | ------------- | ---------- | -------- | ----------------- | -------- | -------- | -------- | | SD1.5 | **0.96** | 0.38 | 0.35 | 0.04 | 0.03 | 0.76 | 0.42 | | DPO | **0.96** | 0.40 | 0.38 | 0.05 | 0.04 | 0.77 | 0.43 | | KTO | **0.96** | 0.44 | 0.39 | **0.08** | 0.07 | 0.78 | 0.45 | | SPO | **0.96** | 0.35 | 0.36 | 0.06 | 0.05 | 0.77 | 0.43 | | SmPO | **0.96** | **0.52** | **0.40** | **0.08** | **0.08** | **0.80** | **0.47** | Supp Table 5: GenEval scores over SDXL baselines. | Model | single object | two object | counting | Attribute binding | position | colors | overall | | ----- | ------------- | ---------- | -------- | ----------------- | -------- | -------- | -------- | | SDXL | 0.97 | 0.70 | 0.41 | 0.22 | 0.10 | 0.87 | 0.55 | | DPO | 0.98 | **0.80** | 0.45 | 0.24 | 0.11 | 0.88 | 0.58 | | MaPO | 0.95 | 0.70 | 0.36 | 0.26 | 0.10 | 0.88 | 0.54 | | SPO | 0.97 | 0.73 | 0.38 | 0.17 | 0.10 | 0.86 | 0.53 | | SmPO | **0.99** | **0.80** | **0.46** | **0.30** | **0.15** | **0.89** | **0.60** | Experimental results indicate that SmPO consistently enhances image-text alignment performance.
null
null
null
null
null
null
On Explaining Equivariant Graph Networks via Improved Relevance Propagation
Accept (poster)
Summary: The paper introduces a novel method called EquiGX aimed at enhancing the explainability of equivariant graph neural networks (GNNs) specifically designed for 3D geometric graphs via the deep Taylor decomposition framework. In the initial version, the authors used incorrect notations (e.g., the relevance score for spherical harmonics in Eq. (7)), which led to a big misunderstanding, so I questioned whether the model might violate equivariance. This has been clarified during the discussion, and I have raised my score from "clear reject (1)" to "clear accept (4)". Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: After resolving the symbol misunderstanding, no other theoretical problems were found. Experimental Designs Or Analyses: Yes. Supplementary Material: I checked the demo notebook provided by the author during the rebuttal. Relation To Broader Scientific Literature: The subject of this article is "Equivariant Graph Networks", and the article only discusses TFN. After discussion, the authors agreed to open a "Future Work" section to discuss the following models: - Invariant Models: SchNet, SphereNet, ComENet - Scalarization-based Models: EGNN, SaVeNet, LEFTNet - Tensor-product-based Models: EquiformerV2, MACE, PACE - Spherical-scalarization Models: SO3KRATES, HEGNN, GotenNet SchNet: A continuous-filter convolutional neural network for modeling quantum interactions, NIPS'17. SphereNet: Spherical Message Passing for 3D Molecular Graphs, ICLR'22. ComENet: Towards Complete and Efficient Message Passing for 3D Molecular Graphs, NeurIPS'22. EGNN: E(n) Equivariant Graph Neural Networks, ICML'21. SaVeNet: A Scalable Vector Network for Enhanced Molecular Representation Learning, NeurIPS'23. LEFTNet: A new perspective on building efficient and expressive 3D equivariant graph neural networks, NeurIPS'23. EquiformerV2: EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations, ICLR'24. MACE: MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields, NeurIPS'22. PACE: Equivariant Graph Network Approximations of High-Degree Polynomials for Force Field Prediction, TMLR'24. SO3KRATES: A Euclidean transformer for fast and stable machine learned force fields, NC'2408. HEGNN: Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks?, NeurIPS'24. GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks, ICLR'25. Essential References Not Discussed: The calculation of the relevance scores for spherical harmonics in Eq. (7) is very similar to the core formula of spherical-scalarization models (e.g. SO3KRATES, HEGNN, GotenNet), which may explain why this method makes sense. Other Strengths And Weaknesses: All the problems have been solved, which are mainly due to misunderstandings caused by symbols. Therefore, I recommend that the authors revise their writing using the symbol system of HEGNN and GotenNet to avoid readers' misunderstanding of scalars and tensors. The authors said that they will accept this suggestion in the subsequent revision of the manuscript. Although ICML does not allow submission of new PDFs, I choose to believe that the authors will make appropriate changes. Other Comments Or Suggestions: - [Line 76, right] $\mathcal{C}\_{(\ell\_1, m\_1),(\ell\_2, m\_1)}^{(\ell\_3, m\_3)}$ in Eq. (2) should be $\mathcal{C}\_{(\ell\_1, m\_1),(\ell\_2, m\_2)}^{(\ell\_3, m\_3)}$. - [Line 205, left] $d_ij$ should be $d_{ij}$. Questions For Authors: See review above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer 6J4F for comments on the paper. We have provided pointwise responses below. >**According to the formula in this paper, the contribution of the atom may change with the reference frame in which the molecule is observed.** We believe there is a misunderstanding. The node explanations by EquiGX are invariant to rotations. In other words, the node importance scores from EquiGX remain unchanged when the input molecule is rotated by a random rotation matrix. Therefore, EquiGX preserves the equivariance of the network. We have also conducted experiments to verify and confirm this invariance. >**Can EquiGX be extended to other equivariant models?** EquiGX can be readily extended to other spherical equivariant models, such as EquiformerV2, MACE, and PACE, which also rely on tensor product operations. However, due to the significant computational cost, we leave such extensions for future work. We would also like to point out that models like SphereNet and ComENet are different, as they operate solely on invariant features without incorporating equivariant ones, and thus do not rely on tensor product operations. We believe these models are beyond the scope of this paper. >**In line 205, $d_i,j$ should be $d_{i,j}.$ In Eq 2, $C^{(\ell_3,m_3)}_{(\ell_1,m_1),(\ell_2,m_1)}$ should be $(\ell_2,m_2)$.** Thanks for pointing out the typos. We will update the manuscripts. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses, but they were very brief and I was not able to resolve my queries at all. In addition, I would like to ask other reviewers to check if the authors' model is equivariant. I don't see its equivariance at all, but other reviewers have not commented on this. I tried to implement the author's formula, and the result is still not equivariant. > **R1. About Equivariance.** The authors claim that "node explanations by EquiGX are invariant to rotations", but I don't see any proof of equivariance/invariance. I would like to point out that since the authors' dataset seems to contain a variety of poses, the model may have learned equivariance driven by the data (rather than from the architecture). Therefore, the experimental verification of equivariance is not convincing. I think the following should be added (all of them): - Rigorous theoretical proof, especially for all formulas using Hadamard multiplication and division. - Equivariant loss results for randomly rotated inputs tested on **untrained models**. - Open source code for reproducing the results. Since my own implementation shows that the algorithm is not equivariant, I am not sure if there is some omission in my implementation. I urge the authors to open source their code to check reproducibility, or a notebook demo. > **R2. About More Models to Analysis.** I think it is unreasonable that the author claims that generalization is easy, but refuses to add analysis of other models in this paper. Especially in fact, scalarization-based models such as EGNN, LEFTNet and spherical-scalarization models such as HEGNN and GotenNet are also special cases of tensor products (and invariant models, '0e'x'0e'->'0e' is of course a special case of tensor products, see e3nn). By analogy, the author casually said that they can expand the analysis, but in fact it is not convincing. Considering the "Equivariant Graph Networks" in the title of the paper, I think it is very inappropriate that the paper only includes the analysis of a single TFN model. **[2nd Updated] Oh, I suddenly found a way to add replies, I hope the authors can notice it. It seems that everyone can directly modify the second reply to achieve interaction.** > **R3. New Guesses and Suggestions** I re-conjectured according to Reviewer V1gi's suggestion. If I understand correctly, $M^{(l_3)}$ here represents a scalar coefficient about $l_3$th-degree steerable features, which is actually an invariant. So, at this time, let me take the last line of Eq. (7) as an example to judge, the situation is as follows: the two scalars are used in Hadamard division, so the result they get is still a scalar. The latter uses Hadamard product, omitting the summation sign mentioned by the author in the reply, so it is actually the inner product of the two, similar to Eq. (6) in HEGNN and Eq. (11) in GotenNet. So, the whole formula should actually be described as follows: $$ \mathcal{R}(Y^{(l\_2)}(\vec{\boldsymbol{r}}_\{ij})=\sum\_{l\_3}\left(\frac{1}{3}\mathcal{R}(\boldsymbol{M}\_{i\to j}^{(l_3)})\oslash \boldsymbol{M}\_{i\to j}^{(l_3)}\right)^{\top}\left\langle\frac{\partial \boldsymbol{M}\_{i\to j}^{(l_3)}}{\partial\tilde{\boldsymbol{v}}\_{ij}^{(l_2)}}, \tilde{\boldsymbol{v}}\_{ij}^{(l_2)}\right\rangle, $$ where $\tilde{\boldsymbol{v}}\_{ij}=Y^{(l)}(\vec{\boldsymbol{r}}\_{ij})$ is an $l_2$th-degree steerable features, and others are all scalars. If so, then the problem is clear, but I am not sure if the authors think so. It seems that the main reason for such a big misunderstanding is that the author omitted the very important summation sign and the symbol system is very confusing. I think it is necessary to modify it. If my speculation this time is correct, this is probably a promising way. Because this formula is actually the core formula of spherical-scalarization models, this may also explain to some extent why these new models are superior. I suggest that the authors can add more discussion in this regard. I would raise my rating if the authors later confirm that my conjecture is correct and revise their formula. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We provide further clarification regarding your concerns below. > **About Equivariance.** Thank you for the suggestion. We have provided a notebook demo to illustrate both the equivariance of the model and the invariance of EquiGX. The notebook is available at the following link: [https://anonymous.4open.science/r/EquiGX_Demo-5797/TestEquivariance.ipynb]. We would like to clarify that the model's equivariance is not learned from data, but is inherently guaranteed by the model's architectural design. In the demo, we use a randomly initialized model and compare the outputs for inputs that are randomly rotated. The difference in outputs is less than 1e-6, which is a negligible numerical difference. This confirms that the model is intrinsically equivariant, rather than learning equivariance from the dataset. Additionally, we demonstrate that the node importance scores computed by EquiGX are invariant to input rotations. In other words, the node importance scores remain unchanged when random rotations are applied to the inputs. This invariance is also verified on randomly initialized models, further avoiding any potential reliance on learned equivariance or invariance. Finally, we would like to emphasize that, when computing the node importance scores, we sum the relevance scores across the feature dimensions. Thus, the use of Hadamard multiplication and division does not break the invariance of the node importance scores. Overall, our experimental verification strongly supports the model's equivariance and EquiGX's invariance. >**About More Models for Analysis.** We are open to extending EquiGX to additional backbone models. We also agree that invariant models can be seen as a special case of tensor products. However, the widely adopted implementations of these models are typically not based on tensor products via the e3nn library. Incorporating them as backbones would require re-implementing existing models using tensor product operations, which is a non-trivial task. In addition, training these models is time-consuming and would demand substantial engineering effort, making it infeasible within the limited timeframe of the rebuttal period. We hope the reviewer understands these constraints.
Summary: Explaining equivariant GNNs for 3D geometric graphs is challenging due to their complex architectures and the difficulty of handling positional data. Existing explainability (XAI) methods mainly focus on 2D graphs and struggle to adapt to equivariant GNNs. To address this, this paper introduces the EquiGX, a novel explanation framework based on Deep Taylor decomposition, extending layer-wise relevance propagation for spherical equivariant GNNs. It decomposes prediction scores and back-propagates relevance through each layer, providing detailed insights into how geometric and positional data influence model decisions. Experiments on synthetic and real-world datasets show that EquiGX effectively identifies critical geometric structures and outperforms existing baselines, offering significantly improved explanations for equivariant GNNs. ## Update after rebuttal The authors have solved most of my questions. I will maintain my score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: See the weaknesses. Experimental Designs Or Analyses: See the weaknesses. Supplementary Material: Appendix Relation To Broader Scientific Literature: See the summary. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths Refer to the summary. Weaknesses This paper presents an insightful analysis of LRP and its extension to equivariant GNNs. This is an interesting topic and the framework might be helpfu for other researches on equivariant GNNs. However, several details could be further refined for clarity and completeness. 1. Figure 1 is confusing. It is recommended to explicitly illustrate the cube motif and highlight the data points with high importance within the motif. Additionally, the meaning of "the other areas" in the figure caption should be clarified. 2. Clarification on importance calculation. The authors separate the layer-wise relevance into TP-based and invariant feature-based propagation. Are these components eventually merged? If so, what is the weighting scheme between equivariant and invariant features? Providing a clear explanation would enhance understanding. 3. Figures 3 and 4, while visually compelling, are somewhat unclear. For instance, in Figure 3, the authors state: "Since the sample is an all-beta protein, ideally the β-sheets should have high importance scores, i.e., be red in the figure." However, it is unclear which part of the visualization corresponds to β-sheets. Additionally, the LRI-Bern method results in an entirely red visualization—does this indicate it is the most effective? The authors should provide a more precise explanation. Other Comments Or Suggestions: No. Questions For Authors: See the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are very glad Reviewer ffwb had a positive initial impression and appreciate your constructive comments. We provide pointwise responses below. >**Figure 1 is confusing.** We apologize for the confusion. In Figure 1, the ground truth is shown in the upper-left corner. Nodes forming the cube motif are highlighted in red. Across all examples, node positions remain consistent, so better alignment with the ground truth reflects a more accurate explanation. In the figure caption, “the other areas” refers to nodes that do not belong to the cube motif. In other words, these nodes form a pyramid shape. We also highlight the nodes in blue in the ground truth. >**Clarification on importance calculation.** As described in Section 3.2, for a single TP-based message passing layer, we decompose the relevance score of each message into three components, including the hidden features, the directional part, and the distance part. The relevance score attributed to the hidden features is further backpropagated toward the input. The relevance scores of the directional and distance components do not propagate beyond this layer. Within each message passing layer, the scores assigned to edge direction and edge distance capture their respective contributions to the final prediction. To compute overall contributions of edge directions and edge distances, we sum these directional and distance relevance scores across all layers. To obtain the final importance score for each input node, we combine the relevance score of the node itself with the relevance scores of its connected edges. In other words, the node-level explanation is calculated as the sum of a node’s own relevance plus half of the relevance of its neighboring edges. >**Figures 3 and 4, while visually compelling, are somewhat unclear. It is unclear which part of the visualization corresponds to β-sheets.** Beta sheets typically appear as flat, arrow-shaped ribbons pointing in a specific direction, often aligned side-by-side to form sheet-like structures. In Figure 3, the regions with arrows represent the beta-sheets. For reference, the red arrows in this [Wikipedia image](https://en.wikipedia.org/wiki/Beta_sheet#/media/File:5CPAgood.png) illustrate their typical appearance. We will revise the manuscript to clarify this in the figure captions and text. Additionally, we would like to point out that the entirely red visualization produced by the LRI-Bern method does not necessarily imply higher effectiveness. An ideal explanation should selectively highlight the beta-sheets in red, while keeping less relevant regions such as the coiled structure near the top and the thin, string-like lines in blue.
Summary: The paper proposes a new method, EquiGX, to explain equivariant GNNs. The method is based on Deep Taylor Decomposition and extends it to perform layer-wise relevance propagation for spherical equivariant GNNs. Specifically, the authors propose new rules to attribute tensor product operations. The experiments show that EquiGX performs better than existing baselines. Claims And Evidence: The claims are well supported. Methods And Evaluation Criteria: The synthetic and real-world datasets make sense and evaluation criteria are common choices for model explainability. Theoretical Claims: Yes I checked the equations and didn't find any issue. Experimental Designs Or Analyses: The experiments are well designed and the paper include qualitative and quantitative evaluations. Supplementary Material: No Relation To Broader Scientific Literature: The paper extends LRP to 3D equivariant GNNs and proposes a new LRP rule for tensor product operations, which can be useful for other works that use equivariant GNNs or tensor product operations. Essential References Not Discussed: Not found. Other Strengths And Weaknesses: Strengths: 1. The method is well-founded theoretically, and the derivation is clear. 2. The writing is easy to follow. 3. Including real-world protein datasets in the experiments is a nice addition. Weaknesses: 1. The method is limited to TP-based models. 2. The source code is not provided. 3. The limitations are not discussed. Other Comments Or Suggestions: The Shapes and Spiral Noise datasets are very similar, and their visualizations look a bit repetitive. It might be better to show the visualization of one and move the other to the appendix. The same applies to the protein datasets. Questions For Authors: 1. The experiments focus on graph-level prediction tasks. Why not include node-level or edge-level predictions? 2. Why isn’t ActsTrack included in the qualitative evaluation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are very glad Reviewer KEDe had a positive initial impression and appreciate your constructive comments. We provide pointwise responses below. >**The method is limited to TP-based models.** We admit that our method is focusing on spherical equivariant GNNs (TP-based models) and leaving generalization to other architectures as future work. However, we would like to highlight that TP-based equivariant networks represent a broad and widely adopted family of models in the AI for science domain. Developing explainability techniques for these models is important, as it can enhance our understanding of existing architectures and potentially lead to their improvement. >**The limitations are not discussed.** Thanks for the suggestion. One limitation of our method is its current focus on tensor product operations, specifically within spherical equivariant GNNs. While this represents a subset of equivariant GNNs, it is a significant and impactful area. Consequently, our method does not yet extend to all types of equivariant GNNs. However, this focus allows for a deep and thorough exploration of tensor product operations, and we plan to generalize our approach to other architectures in future work. >**The Shapes and Spiral Noise datasets are very similar, and their visualizations look a bit repetitive. It might be better to show the visualization of one and move the other to the appendix.** Thanks for the suggestion. We would update the visualization during camera-ready stage. >**The experiments focus on graph-level prediction tasks. Why not include node-level or edge-level predictions?** Evaluating explainability in equivariant networks is inherently challenging, largely due to the difficulty of obtaining meaningful and verifiable ground truth for explanations. In this paper, we address this by constructing synthetic datasets and carefully selecting real-world datasets where such ground truths are available. However, edge-level tasks like force prediction present additional complexity. While molecular dynamics can provide an approximate reference, the underlying rationale behind force prediction remains an open research question. Thus, we leave the exploration of node-level and edge-level tasks to future work. >**Why isn’t ActsTrack included in the qualitative evaluation?** ActsTrack has an average of over 100 nodes per graph, and unlike proteins, it lacks a well-established visualization standard. Additionally, the ground-truth explanations are not immediately obvious, making clear and interpretable visualizations more challenging. As a result, including ActsTrack in the qualitative evaluation could lead to confusing visuals. We plan to include it in the paper later with more refined visualization methods. >**The source code is not provided.** We plan to release our code during camera-ready stage.
Summary: The paper introduces EquiGX, an explanation method for (3D) equivariant GNNs. Existing graph explanation methods mainly focus on 2D GNNs and struggle to explain 3D GNNs. EquiGX extends Deep Taylor decomposition to derive layer-wise relevance propagation (LRP) rules for equivariant GNNs. The method is evaluated on synthetic and real-world datasets, outperforming existing explanation methods. ## update after rebuttal After reading the other reviewers' comments and engaging in discussions with them, I now recognize that this work makes a highly significant contribution toward explaining tensor-product GNNs. Its impact goes beyond simply clarifying how TP-GNNs make decisions—it also provides insight into why TP-GNNs outperform invariant GNNs. The perspectives offered in this paper represent a fundamental advancement in the field, with the potential for broader impact. In light of these considerations, I have raised my score from 3 to 4. Claims And Evidence: Yes, the main claims are all clear with convincing evidence. Methods And Evaluation Criteria: Yes, the method and evaluation criteria make sense. Theoretical Claims: I checked the derivations of the layer-wise decomposition in the main paper. I believe that they are correct. Experimental Designs Or Analyses: Yes, I read through all of Sec. 4 (Experiments), and I don't see any issues. Supplementary Material: I briefly checked the Appendix. Relation To Broader Scientific Literature: It is highly relevant to the field of AI for science. Essential References Not Discussed: Not that I’m aware of. Other Strengths And Weaknesses: Strengths: - Significance: Though explanation methods for (2D) GNNs have advanced over the past few years, explaining equivariant 3D GNN models remains challenging. This is the first few works that target 3D GNN explanation. I appreciate this paper’s contribution in extending Deep Taylor decomposition to equivariant architectures, addressing the limitations of prior graph explanation methods for 3D geometric graphs. - Relevance: Understanding how equivariant GNNs utilize geometric and positional information is crucial for their application in AI for science, especially in molecular modeling. Weaknesses: - Technical Contribution: This paper is an extension of the LRP framework that has been widely applied for vision tasks. Although tensor-product networks are different from normal MLPs or CNNs, the technical contribution of this work could be limited. - Limited Backbone Models: Only one model, TFN, is tested. However, this could be due to the extensive computation requirement of equivariant GNNs. Other Comments Or Suggestions: 1. The overall presentation of this work could be improved, particularly by addressing issues such as the excessive white space around some equations. 2. I do think it would be beneficial to put more effort into discussing and presenting convincing arguments as to why existing (2D) GNN methods are not suited for explaining 3D-equivariant GNNs. Minor issues: 1. Line 205, $d_{ij}$ not $d_ij$ 2. Line 151, “One way to compute such relevance is to the whole neural network as a mathematical function and use the first-order term from the Taylor series expansion.” Should be “… is to treat the whole neural network as…” Questions For Authors: 1. Equivariant GNNs, including the ones referred to in the paper e.g. Equiformer, SE(3)-Transformer, are conducted on widely-used molecular datasets such as QM9 and MD17. Is there a specific reason why the experiments on these datasets are not conducted? 2. In the last paragraph of Sec. 2, the author claims that LRI treats the model as a black box and overlooks the equivariance of the model. Can the authors further explain this part? There are two methods, LRI-Gaussian and LRI-Bern, in the LRI paper, which one does this refer to? What does it mean for them to overlook the equivariance of the model? 3. In both Equations 2 and 5, shouldn’t the $\left(\ell_2, m_1\right)$ be $\left(\ell_2, m_2\right)$ in $\mathcal{C}_{\left(\ell_1, m_1\right),\left(\ell_2, m_1\right)}^{\left(\ell_3, m_3\right)}$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are very glad Reviewer V1gi had a positive initial impression and appreciate your constructive comments. We provide pointwise responses below. > **This paper is an extension of the LRP framework that has been widely applied for vision tasks. Although tensor-product networks are different from normal MLPs or CNNs, the technical contribution of this work could be limited.** We would like to humbly clarify our contribution. We admit that EquiGX is an extension of LRP rules for spherical equivariant graph neural networks. We would like to point out that there are currently no established LRP rules for spherical equivariant Graph Neural Networks. Besides, developing propagation rules for new architectures is a challenging task. For instance, references [1] and [2] extend LRP rules to transformers, and references [3] and [4] extend them to traditional 2D GNNs. Additionally, different approaches to using Taylor decomposition can yield various LRP rules, such as $z^+$ rule and $w^2$ rule. Furthermore, spherical equivariant GNNs, despite having aggregation operations similar to those of traditional GNNs, fundamentally differ due to their reliance on tensor product operations. We are the first to explicitly consider tensor product operations to develop new LRP rules for spherical equivariant GNNs. > **Limited Backbone Models: Only one model, TFN, is tested.** Given the widespread use of powerful spherical equivariant GNNs, understanding their key components, i.e. the Tensor Product (TP), is a fundamental step toward explainability in equivariant models. In this paper, we focus on TFN, the most classical and representative spherical equivariant GNN. Our method is general and can be extended to other spherical equivariant GNNs. However, due to the high computational cost, we leave such extensions to future work. > **Why existing (2D) GNN methods are not suited for explaining 3D-equivariant GNNs?** Thanks for the question. Most existing 2D GNN explanation methods are developed for graph data that capture only topological relationships. However, 3D equivariant GNNs go beyond topology by incorporating rich geometric information, such as interatomic distances, angles, and torsion angles, which are essential for many tasks in scientific domains. These models are specifically designed to respect spatial symmetries, enabling consistent predictions under 3D transformations. 2D explanation methods typically do not consider atomic positions in space or spatial relationships such as angles and distances. As a result, they often fail to capture critical geometric cues. For instance, a small change in a node's 3D position can lead to significant shifts in angles or torsion angles, which may affect the model’s prediction. Since 2D methods are not sensitive to these changes, they tend to provide limited or misleading explanations when applied to 3D-equivariant GNNs. > **Equivariant GNNs are conducted on widely-used molecular datasets such as QM9 and MD17. Is there a specific reason why the experiments on these datasets are not conducted?** While Equivariant GNNs are commonly evaluated on datasets like QM9 and MD17, these datasets lack ground-truth explanations. The underlying rationale for quantum chemical properties and energy predictions is still an open research question, with no definitive consensus. As a result, these datasets are not well-suited for evaluating explainability methods. Therefore, in this paper, we construct synthetic datasets and carefully select real-world datasets where meaningful and verifiable explanations can be obtained. >**What does it mean for LRI treats the model as a black box and overlooks the equivariance of the model?** Both LRI-Gaussian and LRI-Bern, like other perturbation-based XAI methods, treat the model as a black box. This means they rely solely on input-output behavior without requiring access to the model’s internal parameters or gradients. In other words, they evaluate how changes in the input influence the output, without considering the internal structure of the model. Besides, LRI-Gaussian overlooks the equivariance of the model, because the learned Gaussian noise is not equivariant. Specifically, when the input point cloud is rotated, the learned noise does not necessarily rotate accordingly. As a result, the explanations can become sensitive to the choice of the input’s reference frame, which is undesirable for explaining equivariant models. >**In both Eq 2 and 5, should it be ($\ell_2,m_2$) in $C^{(\ell_3,m_3)}_{(\ell_1,m_1),(\ell_2,m_1)}$?** We apologize for the typo. Yes, it should be $(\ell_2,m_2)$. Reference [1] Transformer interpretability beyond attention visualization. CVPR 2021. [2] XAI for transformers: Better explanations through conservative propagation. ICML 2022. [3] Higher-order explanations of graph neural networks via relevant walks. TPAMI 2021. [4] Relevant walk search for explaining graph neural networks. ICML 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Most of my concerns have been resolved. After reading other reviewers' comments and discussion, this work can be an important step in explaining tensor product networks, especially why they are better than invariant networks. I have raised my score.
null
null
null
null
null
null
Unveiling Markov heads in Pretrained Language Models for Offline Reinforcement Learning
Accept (poster)
Summary: Previous works in the area of reinceforcement learning (RL)/foundation models have shown that pre-trained language models (PLMs) can enhance the performance of offline RL. This paper studies an important question: what kind of knowledge from PLMs has been transferred to RL to achieve such good results? They study the attention score distribution of decision-transformer (DT) and PLM-DT, and find that after training some iterations, the score distribution of DT is similar to the initial distribution of PLM-DT. They thus defined the concept of Markov heads, and further show that Markov heads are the key property transferred from PLM, both theoretically and empirically. Furthermore, they find that Markov heads are only beneficial for short-term environments, empirically. And they propose a general approach called GPT-DPMA, which show advantages over GPT-DT across both long-term/short-term tasks. Claims And Evidence: **claim 1**: the score distribution of DT is similar to the initial distribution of PLM-DT evidence: Figure 1, DT is only tested on one task, not enough **claim 2**: the markov heads will not change after fine-tuning evidence: Figure 4, only tested on one task, not enough **claim 3**: markov heads are beneficial for short-term tasks, while not for long-term tasks evidence: Table 3, solid **claim 4**: MoA can reduce the performance gap in long-term tasks evidence: Table 4, solid Methods And Evaluation Criteria: Yes, the proposed methods are practical, and the benchmark d4rl is commonly used. Theoretical Claims: I've checked the proofs for all theoretical results (Theorem 4.3 and Theorem 4.5). Though the proofs are correct, the results are not very strong. 1. The result that $\mathbb E[EAE^\top]$ is a markov matrix is not very useful. It would be better if the authors could prove that, w.h.p., $EAE^\top$ is a markov matrix. 2. The scale of $K$ is uncertain, and might be very small. Experimental Designs Or Analyses: I've checked the experimental parts. The experiments are conducted in a common way in this area. One issue is that some of the experiments are only conducted on one specific task, like hopper. More systematic experiments should be conducted. Supplementary Material: I have briefly looked at the supplementary material, which is the code implementation. But I have not checked the correctness of implementation. Relation To Broader Scientific Literature: I think this paper is important for this area. Many prior works [1,2,3] show the power of PLMs for downstream low-level RL tasks, however, this paper would be the first to offer a reasonable explanation for this phenomenon. Therefore, it would be of significant value for the study of foundation models for decision-making, if their claims are further systematically supported. [1] Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning. Shi et al. ICLR 2024. [2] Decomposed Prompt Decision Transformer for Efficient Unseen Task Generalization. Zheng et al. NeurIPS 2024. [3] Pre-trained Language Models Improve the Few-shot Prompt Ability of Decision Transformer. Yang et al. arXiv 2024. Essential References Not Discussed: I don't think there is any work essentially related to the work not cited. Other Strengths And Weaknesses: Strengths: - The paper is well-written. - The observation is novel. Weakness: - See my comments regarding to ``Claims And Evidence`` and ``Theoretical claims``. Other Comments Or Suggestions: no Questions For Authors: - On long-term tasks (Table 4), GPT-DTMA still cannot surpass DT. Could the author give some explanations? - Would the positional embedding play a role in yielding Markov heads? For example, if the positional embedding is relative, then the model might be trained to attend to nearest token. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review and feedback. We appreciate your positive comments about the novelty and presentation of our work. Please kindly find the response to your concerns below. **W1. For figure 1 and figure 4, DT is tested on only one task.** We have tested DT and GPT-DT on more tasks and the results are shown in Fig.1~10 at [[URL](https://anonymous.4open.science/r/submission9905-7D09/readme.md)]. The conclusion are consistent among different tasks. We will include these results from multiple tasks in the final version. **W2. Stronger results for Theorem 4.3** We extend our proof to show high probability bound. By Theorem 4.3 (page 4), we know $\frac{\mathbb{E}[D]}{\mathbb{E}[O]} > r$ for some $r>0$, where $D \triangleq \frac{1}{K} \sum_{i=1}^K |\Pi_{ii}|, O \triangleq \frac{1}{K(K-1)} \sum_{i \ne j} |\Pi_{ij}|, {\rm and\ } \Pi\triangleq E A E^T.$ We assume each element in embedding vector is i.i.d sampled from $\mathcal{N}(0,1)$. Then $|\Pi_{ij}|$ is sub-exponential with sub-Gaussian proxy $||A||_F$. By Bernstein inequality and Hanson-Wright inequality, we can get the bound for $\mathbb{P}\left( |D - \mathbb{E}[D]| \geq \varepsilon \right)$ and $\mathbb{P}\left( |O - \mathbb{E}[O]| \geq \varepsilon \right)$. By choosing $\varepsilon\triangleq \frac{\mathbb{E}[D] - r \mathbb{E}[O]}{2r} > 0$, we have $\frac{D}{O} > \frac{\mathbb{E}[D] - \varepsilon}{\mathbb{E}[O] + \varepsilon} > r$. Due to $\mathbb{P}(\frac{D}{O}\leq r) \leq \mathbb{P}(D < \mathbb{E}[D]-\varepsilon) + \mathbb{P}(O> \mathbb{E}[O] + \varepsilon) \leq \delta_0$, we can obtain the high-probability bound theorem. We will add the detailed theorem and proof in the final version. **W3. Explanations for the scale**$K$ Recall that in Theorem 4.5 (page 5), for any $K<\Big\lfloor\min\Big\\{\frac{\rho}{r+1}\frac{\overline{|A_{ij}^0|}}{\eta_0 B},\frac{A_{ii}^0}{\eta_0 B}{\rm for\ }i=1,\cdots,d\Big\\}\Big\rfloor$, Markov heads preserve. In remark 4.8, during the observations in experiments, we found that $\eta_0 = 1e-4$, $B = 1e-6$, $\overline{|A^0_{ij}|} = 1e-2$ and we set $r=20$ then $\rho = 3.61$. Since $A^0_{ii}$ are larger than $\overline{|A^0_{ij}|}$, we can ignore the second part in this inequality. By all the parameters, we can conclude that for any $K < 1.7 e7$, theorem 4.5 will be satisfied. In our setting, we let $K = 1e4$ which is within the theoretical range. **Q1. In long-term task, why GPT-DTMA cannot surpass DT?** In our theoretical results, we demonstrate that Markov heads tend to disproportionately attend to the final input token, thereby impairing long-term planning capabilities. Although the incorporation of a gating mechanism can attenuate the influence of these heads, it does not eliminate them entirely. Another contributing factor may be the difference in training corpora: DT is trained exclusively on a reinforcement learning corpus, whereas GPT-DTMA fine-tunes weights pre-trained on a language modeling corpus. Furthermore, the number of trainable parameters plays a nontrivial role in model performance. Consequently, DT tends to exhibit superior empirical performance. The primary motivation for proposing GPT-DTMA is to enhance the long-term planning ability of GPT-based decision transformers while maintaining lower computational costs by circumventing training from scratch. **Q2. Would the positional embedding play a role in yielding Markov heads?** We acknowledge that positional embeddings may also contribute to the model’s tendency to attend more to the final input token, however, positional embeddings are not directly related with Markov heads. It is important to note that neither DT nor GPT-DT employs positional embeddings. Our analysis is confined to the matrix product $W_q W_k^T$, which is independent of any positional encoding. We define $W_q W_k^T$ as the Markov matrix. In Theorem 4.3 (page 4), we demonstrate that for any random embedding matrix $E$, the transformed matrix $E W_q W_k^T E^T$ remains a Markov head in expectation. Thanks again for your constructive suggestions and we will include them in the final version. We are looking forward to further discussions. --- Rebuttal Comment 1.1: Comment: Thank you for the supplementary experiments and the intuition on theorem. I appreciate them and will increase my rating. Additionally, I think both DT and GPT-DT employ positional embeddings. Please see https://github.com/kzl/decision-transformer/blob/master/gym/decision_transformer/models/decision_transformer.py line 40 --- Reply to Comment 1.1.1: Comment: Thank you so much for your acknowledgment and for increasing the rating — we truly appreciate it! Regarding the position embedding, we would like to express more precisely: DT and GPT-DT do not use the original positional embeddings from GPT directly. Instead, they utilize timestep embeddings that serve as their form of positional encoding. These embeddings are also independent of Markov heads, which is consistent with our discussion in Q2. Thank you again for your thoughtful clarification!
Summary: The paper identifies Markov heads in Pretrained Language Models (PLMs), attention heads with extreme focus on the most recent token. These heads transfer to Decision Transformers (DTs) in offline Reinforcement Learning (RL), improving short-term (Markovian) tasks but harming long-term planning. The paper introduces GPT-DTMA, which uses a Mixture of Attention (MoA) mechanism to adaptively weigh heads, balancing short-term and long-term performance. Theoretical analysis proves Markov heads persist under fine-tuning and experiments confirm GPT-DTMA improves performance across various RL tasks. Claims And Evidence: I find the claims supported by convincing evidence. 1. **Markov heads exist in PLMs and aid short-term RL.** - The importance score shows GPT-DT's key heads have diagonal-dominant weight matrices. - GPT-DT outperforms standard DT in short-horizon MuJoCo tasks. - CLIP-DT lacks Markov heads and performs worse, confirming their significance. 2. **Markov heads hurt long-term planning** - GPT-DT underperforms on PointMaze/AntMaze (long-horizon), taking more steps. - GPT-DTMA's MoA down-weights Markov heads, improving performance in long tasks. 3. Markov heads persist through fine-tuning - Theorems show that Markov matrices remain dominant after small-gradient updates - Empirical results confirm that attention remains last-token-focused post finetuning. Methods And Evaluation Criteria: I find the methods and evaluation criteria make sense. ### Methods - Defines Markov heads as diagonal-dominant attention heads. - Identifies Markov heads by measuring zeroed-out head importance. - GPT-DTMA introduces gated MoA, letting the model adapt head weights dynamically. ### Evaluation - Short-horizon tasks: Standard offline MuJoCo locomotion tasks (normalized return) - Long-horizon tasks: Standard offline Ant-Maze, Point-Maze tasks (steps to goal) - Baselines: DT, GPT-DT, DTMA, GPT-DTMA, CLIP-DT, GPT-DTMA-R (penalizing Markov heads) - Robustness: mean and standard deviation over multiple seeds Theoretical Claims: This paper is not a fully empirical paper. To support their main claim about Markov heads, this paper provides two theorems, which I found reasonable with correct proofs. 1. Theorem 4.3: Markov heads retain their property under random projection (embedding changes). 2. Theorem 4.5: Markov heads remain stable under bounded-gradient finetuning. Experimental Designs Or Analyses: The experimental design and analyses seem valid. The only thing I would suggest is to make comparisons to non DT-based offline RL methods (e.g., Decision Convformer or value-based methods) ### Datasets - MuJoCo (short-term) - Point/Ant Maze (long-term) ### Key findings - GPT-DTMA matches GPT-DT in short-horizon tasks while significantly improving long-horizon tasks. - GPT-DTMA-R degrades in short-term, confirming Markov heads aid short-horizon. - CLIP-DT performs worse than GPT-DT, proving Markov heads are beneficial. ### Baselines - I like that the authors choice of baselines to control pertraining (GPT-DT), architecture (DTMA), and alternative PLMs (CLIP-DT) - I suggest the authors to make a comparison to Decision Convformer (Kim et al., 2024), since the paper also discusses Markov properties of short-horizon offline RL tasks. - It would be great if the authors could make comparisons to value-based offline RL methods (e.g., CQL, IQL) since they outperform Transformer based methods on sub-optimal datasets. Supplementary Material: I read the appendix. I found the following sections especially useful. - Experiment setup in Appendix B. - Theoretical proofs in Appendix D confirming key claims - CLIP-DT analysis in appendix F that further validates the hypothesis. Relation To Broader Scientific Literature: This work is related to broader scientific literature and contributes as follows. - Background: Decision Transformers, PLM-based RL - Novel insight: identifies Markov heads as the mechanism behind PLM gains in RL. - Unique approach: MoA provides adaptive attention, unlike past methods relying on trajectory planning (Trajectory Transformer) or hierarchical policies. - Contribution to NLP: connects transformer interpretability (head specialization) with RL decision-making. Essential References Not Discussed: This paper discusses most related references. One I may suggest is 1. **Trajectory Transformer** Janner et al., Offline Reinforcement Learning as One Big Sequence Modeling Problem, 2021, for long-horizon transformer planning Other Strengths And Weaknesses: ### Strengths - Original: First work identifying Markov heads and their role in RL. - Significant: Improves transformer-based RL adaptivity across different timescales. - Clarity: Good structure; clear motivation and results. - Balanced theory & practice: Strong theoretical foundation + empirical validation. ### Weaknesses - Limited PLM scope: Only tested on GPT-2 small; unclear if Markov heads exist in other PLMs. - Complexity: MoA adds parameters, though minimal overhead. - No direct comparison to RL SOTA: Does not benchmark against CQL/IQL in D4RL. - Minor clarity issues: Typos and vague phrasing (e.g., "unknown-term environments"). Other Comments Or Suggestions: I enjoyed this work, and some minor suggestions that can improve this work, or some follow-up work ideas I can think of are as follows. 1. Test other PLMs to confirm generality. 2. Analyze MoA behavior per time step (how weights change dynamically). 3. Compare GPT-DTMA’s performance against value-based offline RL baselines (CQL/IQL). Questions For Authors: Here are some questions that I found confusing. 1. How was the threshold for Markov heads chosen? 2. Would a manual reduction of Markov head influence improve long-term tasks further? 3. Have you tested Markov heads in other PLMs? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comprehensive review and constructive comments. We appreciate your acknowledgement regarding the originality, significance, clarity and presentation of our work. Please find our response to suggestions and questions. **S1&Q3. Test other PLMs to confirm generality.** We have tested the existence of Markov heads in other pre-trained large models, such as GPT-J, ImageGPT [1]. We examined the attention heads in GPT-J and found that all of them cannot satisfy the condition (i) in Definition 4.1, i.e. Markov head is not detected. We also test the performance of initializing DT with GPT-J and ImageGPT checkpoints for short-term environments. Table R1 shows the result comparisons among different PLMs. Results show that without Markov heads, the performances of GPTJ-DT and ImageGPT-DT fail to align with GPT-DT. **Table R1**: |Dataset(short-term)|GPT-DT|GPTJ-DT|ImageGPT-DT| |-|-|-|-| |Hopper-m|77.9|72.5|7.3| |Hopper-m-r|77.9|73.8|7.6| |Walker2d-m|77.1|75|1.2| |Walker2d-m-r|74.0|70.3|12.3| [1] Chen, Mark, et al. "Generative pretraining from pixels." *International conference on machine learning*. PMLR, 2020. **S2. Analyze MoA behavior per time step (how weights change dynamically).** We analyzed the step-wise changes of average weights for Markov heads and non-Markov heads. Fig.11 and Fig.12 in [[URL](https://anonymous.4open.science/r/submission9905-7D09/readme.md)] show the results for short-term and long-term environment. We conclude that, for short-term environment, the weights of Markov heads gradually ascent while the weights of non-Markov heads gradually descent. In contrast, for long-term environment, the weights of Markov heads gradually descent while the weights of non-Markov heads ascent. **S3. Compare GPT-DTMA’s performance against value-based offline RL baselines.** We have conducted experiments on value-based offline RL methods (CQL) and Decision Convformer (DC). Table R2 shows results for short-term environments, in which CQL generally perform worse than GPT-DT or GPT-DTMA, and DC outperforms GPT-DTMA. Table R3 shows results for long-term environments, where CQL outperforms other methods and DC performs worst. While the results of DC imply that directly utilizing the Markov property benefits short-term environment performances and hurts long-term performances, they strengthened our claim. **Table R2**: |Dataset(short-term)|CQL|DT|GPT-DT|DC|GPT-DTMA| |-|-|-|-|-|-| |Hopper-m|58.1|67.4|77.9|**79.5**|77| |Hopper-m-r|75.3|74.1|77.9|**82.1**|80.4| |Walker2d-m|72.7|74.3|77.1|79.3|**79.9**| |Walker2d-m-r|78.6|71.9|74.0|**79.1**|77.0| **Table R3**: | Dataset (long-term) | CQL | DT | GPT-DT | DC | GPT-DTMA | | --- | --- | --- | --- | --- | --- | | PointMaze-large | **167.7** | 195.3 | 257.3 | 276 | 203.0 | **S4. Extra related works** Thanks for pointing out. Decision Convformer (DC) explicitly trains convolutions to enhance local attention; Trajectory Transformer employs a different trajectory formation than DT. In contrast with these work, we identify Markov heads in PLM that influences the short-term and long-term planning ability. We will include these discussions in our final version. **Q1. How was the threshold for Markov heads chosen?** We begin by recalling the definition of a Markov matrix as stated in Definition 4.1: all diagonal elements are positive, and the ratio $\frac{\overline{|A_{ii}|}}{\overline{|A_{ij}|}} > r$. Let $\overline{|A_{ii}|} = m$ and $\overline{|A_{ij}|} = n$, then after applying softmax function, the diagonal element becomes $\frac{e^m}{e^m + (d-1) e^n}$, where $d$ is the embedding dimension. To determine a reasonable range for $r$, we assume that the diagonal element $\frac{e^m}{e^m + (d-1) e^n}$ should be at least $0.5$ in order to express higher attention on last input token, i.e., $\frac{e^m}{e^m + (d-1) e^n} > 0.5$. Then we can obtain that $r > \ln(d-1) + 1$. In our setting, $d = 768$ and $r > 7.64$. In remark 4.7, we set $r=20$ to identify Markov heads of GPT-DT. Under this threshold, the corresponding diagonal element is at least 0.99, indicating an extreme focus on last input token. **Q2. Would a manual reduction of Markov head influence improve long-term tasks further?** A manual reduction of Markov head influence is shown to improve performance on long-term tasks. We investigated this approach using Attention with Reverse Linear Biases (ALiBi-R). ALiBi-R introduces a manual reduction of Markov head behavior by penalizing diagonal elements and simultaneously enhancing the weights assigned to distant elements in the attention matrix. **Table R4**: | Dataset (long-term) | DT | GPT-DT | GPT-DTMA | GPT-DT-ALiBi-R | | --- | --- | --- | --- | --- | | PointMaze-large | 195.3 | 257.3 | 203.0 | 216 | For the minor clarity issues, we will revise and address this issues accordingly. Thank you again for your constructive comments!
Summary: Incorporating Pretrained Language Models (PLMs) into Decision Transformers (DTs) has shown promise in the area of offline reinforcement learning (RL). However, it is unclear why the representations obtained from NLP tasks would be beneficial for RL tasks. The authors aim to address this question by analyzing the attention heads of several models and demonstrating that they exhibit Markov properties. Identifying whether these heads are Markov or not is crucial for understanding the limitations of these models in solving short-term and long-term tasks in various environments. The authors perform several experiments in both short- and long-term environments and show the impact of Markov heads on model performance. They also conduct various ablation studies to further support their findings, such as examining the relationship between context length and Markov head weight, and initially down-weighting Markov heads to see how this directly affects performance. Claims And Evidence: The authors claim that PLMs influence performance in offline RL based on Markov heads, which are the key information transferred from PLMs. They argue that this is beneficial only for short-term environments and has a negative impact on long-term environments. The evidence that the authors show is based on empirical observation that supports their theory. The evidence is not entirely convincing; there has been a lot of research on transformer attention regarding the significance of each head and the observation that not all attention heads are necessary. However, this paper seems disconnected from that body of work. Additionally, it is unclear whether the issue the authors are addressing is related to a distribution shift or a modeling problem, as the paper does not clarify how the short- and long-term data splits were determined. It also does not specify whether the model was trained and tested in the same or different environments. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem at hand. Theoretical Claims: No, I did not check the correctness of the proofs in the paper. Experimental Designs Or Analyses: The author conducted several experiments and analyses to support the claims in their paper: - Main Results: The main experiments were performed on both short-term tasks (using Mujoco) and long-term tasks (using PointMaze and AntMaze). The authors compared their results to several baselines, which were trained using PLM and from scratch. They analyzed their findings and briefly explained that their proposed algorithm outperforms the others. - Ablation Studies: The primary focus of the authors’ ablation studies was to compare the relationship between context length and Markov head weights in both short-term and long-term tasks. Although they demonstrated that Markov head weights are higher for short-term than long-term tasks, the paper does not include sample ablations for other models, making it difficult to fully understand the differences. Supplementary Material: No Relation To Broader Scientific Literature: The key contribution to the broader scientific literature is understanding when the representations of PLMs will be helpful in solving RL tasks. In particular, the papers suggest that PLMs learn what are referred to as Markov Heads, which RL tasks can take advantage of. However, these Markov Heads may only be adequate for short-term tasks and not for long-term tasks. Therefore, the broader scientific connection is to determine when PLMs will be beneficial for solving RL tasks. Essential References Not Discussed: - MOH: MULTI-HEAD ATTENTION AS MIXTURE-OF-HEAD ATTENTION by Jin et al. 2024 - Are Sixteen Heads Really Better than One? by Michel et al. 2019 Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. Would a MoE (mixture of experts) perform better in these long-term environments? 2. What is the high-level intuition behind a Markov head? Additionally, why is diagonal dominance important in the context of NLP and RL? I'm a bit confused about the significance of having an attention head that exhibits diagonal dominance. I would assume some attention heads display diagonal dominance while others do not, as each attention head focuses on different patterns within the data. 3. How did you use for the train and test splits when training DT and GPT-DT? Did you train and test on different datasets from the same environment, or did you train in one environment and test in another? 4. Not all attention heads are Markov heads. How do the non-Markov heads affect performance in both long-term and short-term environments? In general, what percentage of heads are considered Markov heads? Are Markov heads fixed within a particular environment, or do the heads classified as Markov change depending on the environment? 5. Section 4.2: "In real applications, it is hard to determine whether the planning ability is required by the current environment without prior knowledge". I am uncertain whether the issues discussed stem from distribution shift problems due to changes in the training and testing environments or if they are related to modeling flaws. 6. Why do you need to adaptively control the weight of the attention heads? It does not seem like you are addressing the modeling problem. 7. Do you have a skyline model that shows what good behavior looks like, even if it's achieved through cheating? For example, imagine training a model on certain tasks and then testing it on different tasks. A skyline model would be trained on the test tasks to observe the behavior of the model as if it had seen the training datasets. 8. How does the Markov heads change for DT and GPT-DT similar to the experiments shown in table 5 and table 6? 9. I assume that with an increase in context length, regardless of the model, the models will tend to focus more easily on distant information because there is more content to consider. This seems like a fairly general phenomenon that is not specific to GPT-DTMA. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Please kindly find the response to your concerns below. **Q1. Would a MoE (mixture of experts) perform better in these long-term environments?** It’s interesting to investigate whether MoE perform well in long-term environments, however, MoE is not considered in this work due to the main focus of this work is on unveiling the impact of Markov heads in PLMs that affect offline RL performances. Indeed, Markov heads may exist in each expert of an MoE model, so our findings may also provide insights for explaining the performance of MoE in long-term environments. **Q2. What is the high-level intuition behind a Markov head? Additionally, why is diagonal dominance important in the context of NLP and RL?** The underlying intuition behind Markov heads is that they function as strong local policy learners, relying predominantly on the most recent observation to inform decisions. This behavior mirrors the memoryless property of Markov processes. Diagonal dominance is a critical characteristic, as it ensures that attention is primarily focused on the current input token, thereby reinforcing the Markovian assumption and promoting stable learning in highly reactive environments. While we acknowledge that not all attention heads exhibit diagonal dominance, we demonstrate that this variation reflects the emergence of diverse temporal planning behaviors. Specifically, only a subset of heads learns to follow short-term decision patterns, a phenomenon particularly pertinent in reinforcement learning settings where tasks may exhibit heterogeneous temporal dependencies. **Q3. How did you use for the train and test splits when training DT and GPT-DT? Did you train and test on different datasets from the same environment, or did you train in one environment and test in another?** Following common practice in offline RL research, we train each model with offline datasets collected by D4RL for each environment respectively, and test the model in the same environment. **Q4. How do the non-Markov heads affect performance in both long-term and short-term environments? In general, what percentage of heads are considered Markov heads? Are Markov heads fixed within a particular environment, or do the heads classified as Markov change depending on the environment?** Non-Markov heads show almost equal attention on each tokens, and we can see that the weights of all non-Markov heads become larger in long-term environments, such as PointMaze tasks. During our observations, we found that 3 of 12 heads are Markov heads. The Markov Heads are defined according to Definition 4.1 and 4.2 regardless of environments. According to Theorem 4.5, for GPT-DTs, this Markov Head property are fixed and won’t change by fine-tuning. For DT, the model is trained to obtain or not obtain Markov Heads based whether it’s short-term or long-term environment. **Q5. Is the issue discussed in Section 4.2 stem from distribution shift problems due to changes in the training and testing environments or related to modeling flaws.** We would like to clarify that the issues discussed are neither a distribution shift problem nor related to modeling flaws. We state that when given an new task for a PLM initialized DT model to be fine-tuned on, we may not know the planning ability required (either short-term or long-term) for this task in advance. If long-term planning ability is required, we have proved that the Markov Heads in PLM weights will hurt performance. **Q6. Why do you need to adaptively control the weight of the attention heads?** Due to the issue stated in Section 4.2 and given our statement in Q5, we would like the model to automatically adapt to the planning ability needed. Therefore, GPT-DTMA learns a weight reducing the influence of Markov heads to enhance long-term planning ability of GPT-DTs. **Q7. Do you have a skyline model that shows what good behavior looks like?** While our experiments follow the standard offline RL settings, trainings and testings are conducted on the same task for each model. Therefore, the current results have represented the “good behavior” given each model configuration. **Q8. How does the Markov heads change for DT and GPT-DT as in Table 5/6?** We would like to clarify that Markov head is not observed in DT. Table 5/6 shows the head weights given by the MoA module in GPT-DTMA, therefore they are not available for DT and GPT-DT. **Q9. With an increase in context length would the models tend to focus more easily on distant information?** In Theorem 4.3 and 4.5, we prove that Markov heads persists in GPT-DT during fine-tuning, so it will not tend to focus on distant information natively, even when context length is increased. Therefore, the performance gain brought by MoA to GPT-DTMA is non-trivial and cannot be replaced by simply extending the context length. Thank you again for your reviewing efforts and we sincerely hope our responses have addressed your concerns.
Summary: This paper investigates why pre-trained language models (PLMs) boost Decision Transformer performance in offline RL setting. The authors identify crucial "Markov heads" within PLMs that strongly focus attention on the most recent input state. While beneficial for short-term tasks like MuJoCo, theoretical analysis and experiments show these heads are rigid and cannot be easily changed via fine-tuning, thus hindering performance in long-term planning tasks like Mazes. To address this limitation, the paper introduces GPT-DTMA, which uses a Mixture of Attention mechanism to adaptively weight the influence of different attention heads. This allows the model to dynamically control the impact of Markov heads based on the specific environment's requirements. Results demonstrate that GPT-DTMA outperforms baselines in short-term settings and significantly closes the performance gap in long-term scenarios. Claims And Evidence: The paper does a good job supporting the claims. For experimental evidence, the main issue is the improvement of GPT-DTMA—for most cases in Table 3, the improvement is within the confidence interval, which weakens the validity of the proposed algorithm. Even for long-term scenario in table 4, the case is not that better. Methods And Evaluation Criteria: The proposed methods overall make sense. For the benchmark environment, do you have any other experiments beyond the robot environment or maze environment? They are relatively old environments in the offline RL setting. Theoretical Claims: I didn't check the details but the overall logics seem fine. My main point is that the author should address more on how some of the assumptions/conditions are determined. For example, for more explanation on the physical meaning of definition 4.1 and 4.2 can make it more reader-friendly. Experimental Designs Or Analyses: I have checked the validity of experiments, and most are reasonable, clear. My only issue is the significance of table 3/4. Supplementary Material: I have checked the supplementary material with the environmental setting and it looks reasonable. Relation To Broader Scientific Literature: It is closely related with the LLM and pretraining literature. The author properly discuss its relationship in the related work and preliminary section. Essential References Not Discussed: All references are discussed. Other Strengths And Weaknesses: Strengths: The overall idea is interesting and the demonstration process is quite clear and sound. The proposed Markov Head Concept is novel and properly addressed. weakness: While GPT-DTMA shows improvements over GPT-DT and DT baselines, the paper doesn't extensively compare its absolute performance against the broader state-of-the-art in offline RL methods on these benchmarks (which might include non-Transformer methods or different Transformer variants). The focus is more internal to understanding PLM transfer within the DT framework. Other Comments Or Suggestions: No Questions For Authors: 1. Your analysis focuses primarily on GPT-2. Have you conducted preliminary investigations or do you have hypotheses about whether similar Markov head phenomena exist and exhibit the same stability when using other PLM architectures? 2. The Markov heads are based on a ratio r (in Definition 4.1/4.2 context). How sensitive are your findings – specifically, the number of heads identified as Markov and the overall conclusions about their impact – to the choice of this threshold r? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Please kindly find the response to your concerns below. **W1. The improvement of GPT-DTMA is within the confidence interval.** The experiment are repeated three times to ensure significance. We would also like to emphasize that, one important objective for our experiments is to validate the influence of Markov heads in short-term and long-term environments. Since Markov heads play important role in both GPT-DT and GPT-DTMA, it is reasonable that their performance stay close in short-term environments. However, their performance gap in long-term environment is larger, showing support to our main claim. **W2. The paper doesn't extensively compare its absolute performance against the broader state-of-the-art in offline RL methods on these benchmarks.** We have extended experiments to compare with value-based offline RL methods (CQL) and Decision Convformer in our results. Please refer to Table R2 and Table R3. Table R2 shows results for short-term environments, in which CQL generally perform worse than GPT-DT and GPT-DTMA and DC outperforms GPT-DTMA. Table R3 shows results for long-term environments, where CQL outperforms other methods and DC performs worst. While the results of DC imply that directly utilizing the Markov property benefits short-term environment performances and hurts long-term performances, they strengthened our claim. **Table R2**: | Dataset (short-term) | CQL | DT | GPT-DT | DC | GPT-DTMA | | --- | --- | --- | --- | --- | --- | | Hopper-m | 58.1 | 67.4 | 77.9 | **79.5** | 77 | | Hopper-m-r | 75.3 | 74.1 | 77.9 | **82.1** | 80.4 | | Walker2d-m | 72.7 | 74.3 | 77.1 | 79.3 | **79.9** | | Walker2d-m-r | 78.6 | 71.9 | 74.0 | **79.1** | 77.0 | **Table R3**: | Dataset (long-term) | CQL | DT | GPT-DT | DC | GPT-DTMA | | --- | --- | --- | --- | --- | --- | | PointMaze-large | **167.7** | 195.3 | 257.3 | 276 | 203.0 | **Q1. Your analysis focuses primarily on GPT-2. Have you conducted preliminary investigations or do you have hypotheses about whether similar Markov head phenomena exist and exhibit the same stability when using other PLM architectures?** We have tested the existence of Markov heads in other pre-trained large models, such as GPT-J, ImageGPT [1]. We examined the attention heads in GPT-J and found that all of them cannot satisfy the condition (i) in Definition 4.1, i.e. Markov head is not detected. We also test the performance of initializing DT with GPT-J and ImageGPT checkpoints for short-term environments. Table R1 shows the result comparisons among different PLMs. Results show that without Markov heads, the performances of GPTJ-DT and ImageGPT-DT fail to align with GPT-DT. [1] Chen, Mark, et al. "Generative pretraining from pixels." International conference on machine learning. PMLR, 2020. **Table R1**: | Dataset (short-term) | GPT-DT | GPTJ-DT | ImageGPT-DT | | --- | --- | --- | --- | | Hopper-m | 77.9 | 72.5 | 7.3 | | Hopper-m-r | 77.9 | 73.8 | 7.6 | | Walker2d-m | 77.1 | 75 | 1.2 | | Walker2d-m-r | 74.0 | 70.3 | 12.3 | **Q2. The Markov heads are based on a ratio r (in Definition 4.1/4.2 context). How sensitive are your findings – specifically, the number of heads identified as Markov and the overall conclusions about their impact – to the choice of this threshold r?** We begin by recalling the definition of a Markov matrix as stated in Definition 4.1: all diagonal elements are positive, and the ratio $\frac{\overline{|A_{ii}|}}{\overline{|A_{ij}|}} > r$. Let $\overline{|A_{ii}|} = m$ and $\overline{|A_{ij}|} = n$, then after applying softmax function, the diagonal element becomes $\frac{e^m}{e^m + (d-1) e^n}$, where $d$ is the embedding dimension. To determine a reasonable range for $r$, we assume that the diagonal element $\frac{e^m}{e^m + (d-1) e^n}$ should be at least $0.5$ in order to express higher attention on last input token, i.e., $\frac{e^m}{e^m + (d-1) e^n} > 0.5$. Then we can obtain that $r > \ln(d-1) + 1$. In our setting, $d = 768$ and $r > 7.64$. In remark 4.7, we set $r=20$ to identify Markov heads of GPT-DT. Under this threshold, the corresponding diagonal element is at least 0.99, indicating an extreme focus on last input token. From Table 1, we can see that there are three Markov heads before/after fine-tuning for any $r\in(20,+\infty)$. Thank again for your thoughtful reviews. We will incorporate the discussions in the revised version and we are looking forward to further discussions.
null
null
null
null
null
null
PEAKS: Selecting Key Training Examples Incrementally via Prediction Error Anchored by Kernel Similarity
Accept (poster)
Summary: The paper introduces an algorithm for Incremental Data Selection (IDS) that selects training examples from a continuous data stream by combining prediction error and kernel similarity. The problem is important for the machine learning community. IDS addresses the challenge of efficient data utilization in deep learning, particularly in scenarios where full datasets are unavailable upfront. PEAKS dynamically balances samples with high prediction errors (indicating model uncertainty) and those aligned with class prototypes (via kernel similarity). Experiments on image datasets (CIFAR100, Food101, WebVision) demonstrate PEAKS' superiority over baselines like EL2N and GraNd, especially in low-data regimes. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The paper introduces an algorithm for Incremental Data Selection (IDS) that selects training examples from a continuous data stream by combining prediction error and kernel similarity. The problem is important for the machine learning community. IDS addresses the challenge of efficient data utilization in deep learning, particularly in scenarios where full datasets are unavailable upfront. PEAKS dynamically balances samples with high prediction errors (indicating model uncertainty) and those aligned with class prototypes (via kernel similarity). Experiments on image datasets (CIFAR100, Food101, WebVision) demonstrate PEAKS' superiority over baselines like EL2N and GraNd, especially in low-data regimes. Essential References Not Discussed: [1] Cost-Effective training of deep CNNs with active model adaptation. KDD 2018 See weaknesses. Other Strengths And Weaknesses: # Strengths 1. The problem of IDS is important for the machine learning community. 2. The proposed method employs a measure that considers two core factors. 3. The proposed method is evaluated on several vision datasets against other baselines. # Weaknesses 1. The core idea of PEAKS combines prediction error and kernel similarity. While the integration is practical, the theoretical contribution is incremental rather than groundbreaking, lacking a novel framework. Besides, the proposed method shares concepts in the active learning field [1], while proper discussions are not included in this paper. 2. The experiments are restricted to image classification tasks. There is no validation on non-vision domains (e.g., NLP, time-series) or regression tasks, raising concerns about generalizability. 3. While PEAKS is framed as efficient, computational overhead from kernel similarity calculations, cache maintenance, and dynamic thresholds is not quantified. This omission leaves scalability for large-scale models/datasets unclear. 4. PEAKS-V relies on a validation set for class prototypes, which is often impractical in streaming scenarios. The validation-free PEAKS variant underperforms other baselines, suggesting robustness issues in real-world deployments. Besides, the authors only report the average accuracy while ignoring the variance in the Tables, making it hard to understand the robustness. 5. The paper does not address how sensitive PEAKS is to the choices on hyperparameters like selection rate and refresh period, limiting reproducibility and adaptability to diverse settings. 6. When using pre-trained models as the initialization, another concern also emerges, i.e., the is overlapping between the pre-training dataset and the downstream tasks. If the pre-trained model has already known them, why bother conducting further learning processes? [1] Cost-Effective training of deep CNNs with active model adaptation. KDD 2018 Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are thankful to the reviewer for their detailed assessment and helpful suggestions. **Weakness-1** We acknowledge that our theoretical contribution builds incrementally on existing frameworks rather than proposing an entirely new theoretical foundation. Our primary contributions are twofold. First, we formulate the Incremental Data Selection (IDS) problem — a practical, previously unexplored setting that addresses real-world constraints in data selection. Second, we devise the PEAKS algorithm tailored for the IDS. We also thank the reviewer for highlighting the connection to AL. We currently have a brief discussion of AL in Appendix C and will move this to the main paper, expanding it to include the suggested reference. **Weakness-2** We selected image classification as a testbed for IDS as it is a prevalent and well-established task in the data selection literature (e.g., [1, 2]). While PEAKS' design is not specific to the image domain, we acknowledge that the absence of empirical evaluation on other tasks is a limitation. We commit to adding a discussion of this limitation in the revised paper. [1] Beyond neural scaling laws: beating power law scaling via data pruning, NeurIPS 2022. [2] Deep Learning on a Data Diet: Finding Important Examples Early in Training, NeurIPS 2021. **Weakness-3** We thank the reviewer for raising this point and will clarify these details in the paper. Due to the approximations in Section 3, kernel similarity in PEAKS reduces to the product of the error and output logit (Eq.11), incurring no extra cost. Regarding cache and thresholding (which are not specific to PEAKS but inherent to the IDS), the required overhead is minimal. 1. The cache stores only $\tau \times \delta$ floating-point values. For example, in the Section 5.2 experiments, the smallest cache stores 200 values (0.78 KB), while the largest (WebVision, 100k budget) stores 9000 values (35 KB). 2. Dynamic thresholding requires maintaining a sorted cache for percentile computation. These overheads are negligible compared to the training cost. **Weakness-4 (performance of PEAKS)** We would like to respectfully clarify that the validation-free PEAKS variant consistently demonstrates strong performance across experiments. As demonstrated in Figure 3, Table 2, and Figure 4, the validation-free PEAKS consistently outperforms all baselines across datasets and budgets, performing slightly worse only on the simpler CIFAR100 dataset. Notably, on the challenging real-world WebVision dataset, PEAKS significantly outperforms the closest baselines by margins of 4.9%, 5.2%, and 4.2% across the three evaluated data budgets. This demonstrates that PEAKS is the most robust among baselines for real-world scenarios. **Weakness-4 (accuracy variance)** Thank you for this important point about reporting statistical variance. Due to space limitations in this response, we cannot present tables here. However, we can confirm that the accuracy variance is within an acceptable range compared to the performance differences between baselines. The complete variance data for all experiments will be included in the revised manuscript. **Weakness-5** We address selection rate sensitivity in Section 5.3, where Figure 4 shows PEAKS consistently outperforms other methods across selection rates ranging from 10% to 90%. Regarding refresh period, we wouldn't expect methods to be highly sensitive to this parameter as it only determines how many recently seen samples are considered for modeling the score distribution for percentile selection. *We also conducted an additional ablation study* varying $\tau$ from 50 to 300 (default was 100) on two datasets following the setting in Section 5.3. As the table below shows, the results remain consistent across different $\tau$ values. |Method|F101 (τ=50)|F101 (τ=200)|F101 (τ=300)|F101-N (τ=50)|F101-N (τ=200)|F01-N (τ=300)| |---|---|---|---|---|---|---| |PEAKS|**73.0 (±1.4)**|**72.8 (±1.3)**|**72.3 (±1.1)**|**62.0 (±2.6)**|**62.4 (±2.5)**|**61.9 (±2.4)**| |Moderate|70.4 (±1.3)|70.6 (±1.2)|71.1 (±1.2)|61.8 (±2.7)|62.2 (±3.4)|61.8 (±2.9)| |EL2N |66.3 (±1.4)|66.8 (±1.5)|67.5 (±1.8)|44.4 (±2.3)| 45.4 (±2.5)|45.4 (±2.7)| |Uncertainty|71.3 (±1.4)|71.8 (±1.9)|72.1 (±1.8)|54.6 (±2.7)|55.3 (±2.2)|55.7 (±3.0)| **Weakness-6** In Section 3, we assume that after the initialization phase (training on a few random samples), our pre-trained model acquires "decent" performance. We will clarify in the revised paper that this is far from satisfactory performance. To illustrate this clearly, we provide *additional results* below, comparing the accuracy of the pre-trained model after the initialization phase, and after IDS using PEAKS (budget x4). As shown, the model's performance after initialization does not approach the performance we report after IDS |Dataset|After Init|After IDS| |---|---|---| |C100|41.7 ±4.0|82.7 ±1.1| |F101|37.8 ±1.1|72.9 ±1.4| |F101-N|24.9 ±2.0|62.2 ±3.3| |WebVision|26.9 ±3.9|59.0 ±0.4|
Summary: This work introduces Incremental Data Selection (IDS) and proposes PEAKS, a method that selects training samples based on prediction error and kernel similarity. PEAKS efficiently builds training datasets while improving model performance. Experiments show it outperforms existing methods, significantly reducing data needs while maintaining accuracy, especially on large-scale datasets like WebVision. Claims And Evidence: This paper supports its claims clearly with a combination of mathematical derivations and detailed experimental setups. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are meaningful for incremental data selection. PEAKS is well-suited for streaming data, and benchmark datasets ensure comprehensive evaluation. Theoretical Claims: The paper does not involve highly complex mathematical derivations. The presented formulas are correctly applied and support the method’s claims. No issues were found with their usage. Experimental Designs Or Analyses: The experimental design appears detailed and supports the reliability of the conclusions. Supplementary Material: I briefly reviewed the supplementary material, which primarily consists of experimental details. Relation To Broader Scientific Literature: PEAK combines similarity and uncertainty, offering an innovative approach not widely explored in existing methods, advancing related research. Essential References Not Discussed: There are no essential references missing in the paper. Other Strengths And Weaknesses: Strengths: 1: The idea is explained clearly and is also innovative. 2: The experiments are thorough, making the conclusions highly convincing. Other Comments Or Suggestions: None. Questions For Authors: The Neural Tangent Kernel seems to be applicable only to infinitely wide networks. Are there alternative methods for other networks? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's supportive and positive evaluation of our work. **The Neural Tangent Kernel seems to be applicable only to infinitely wide networks. Are there alternative methods for other networks?** The reviewer raises an important clarification point regarding the Neural Tangent Kernel (NTK) being theoretically guaranteed only for infinitely wide networks. We acknowledge that our networks do not operate in the strict NTK regime. However, based on intuition that in the model fine-tuning phase, weight changes happen less dramatically, our approach uses a first-order Taylor approximation as a practical tool to model network behavior between consecutive updates, and is not claiming the network exists in the kernel regime. [1] also used first-order approximations to derive methods and shed insight for practical networks like ours. Our empirical results also support this position. [1] Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning, NeurIPS 2024
Summary: This paper poses Incremental Data Selection (IDS) problem where examples arrive continuously during training. Then it proposes a prediction error-based method PEAKS to address the problem, showing that a sample’s impact is influenced by both its position in feature space and its prediction error. Experimental results demonstrate the effectiveness of proposed method compared to naive baselines (such as random selection and Moderate). Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: Fair Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths 1. Incremental Data Selection problem is interesting and seems to be important in real world applications. 2. The paper is well organized and written. Weakness 1. It is unclear why and how the proposed method (PEAKS) is better than other baselines in IDS setting. The experimental results have verified the effectiveness of PEAKS, while further explanation and demonstration is needed. 2. How the proposed method performs on standard offline data selection setting, can it still outperforms other baselines? 3. The experimental setup is not aligned with existing works. For classification like CIFAR-100, a standard setup is SGD optimizer with a momentum of 0.9, weight decay of 5e-4, and an initial learning rate of 0.1. While this work adopts adamw optimizer, which achieves worse performance than SGD on classification datasets. It remains unclear if the proposed method performs other baselines when using SGD optimizer. 4. Lack of experiments on large-scale real-world datasets like ImageNet-1K. 5. This paper focuses on online setting and the authors argue that the scenario naturally extends to continual learning when the input distribution evolves over time. Therefore, it is helpful to conduct experiments on online continual learning benchmarks to demonstrate the performance, e.g., can PEAKS enhances the ER [1] and DER++ [2] by replacing random selection with PEAKS. 6. The reviewer suggests a simple baseline: selecting training samples which are wrongly predicted but with lower confidences (maximum posterior probability). [1] Learning to learn without forgetting by maximizing transfer and minimizing interference. [2] Dark Experience for General Continual Learning: a Strong, Simple Baseline. Other Comments Or Suggestions: The reviewer suggests that the authors add a summary of the paper's contributions at the end of the introduction. This summary would help emphasize the key points and guide readers through the rest of the manuscript. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough feedback. **Weakness-1** We thank the reviewer for pointing out that the draft lacks discussion of why PEAKS is effective. In the revised manuscript, we will explicitly clarify this. We believe PEAKS' main advantage is its ability to discriminate hard examples from outliers and noise. By combining two scores, it effectively picks typical but hard examples. In contrast, error-based methods like EL2N and GraNd purely focus on high-error samples that might be mislabeled. Moderate selection is conservative, filtering out noise but also ignoring valuable examples. Uncertainty is an unsupervised metric that does not exploit label information, making it immune to label noise but also limiting the information it can leverage. **Weakness-2** *We ran some additional experiments in the offline setting.* We first trained a ResNet-18 on the full Food101-N dataset (our largest dataset for training from scratch experiments, focusing on one dataset due to time limitations). Second, we used PEAKS and other baselines to rank the full dataset based on scores. Finally, we trained a model from scratch using 50%, 60%, 70%, and 80% of the top samples. As seen from the results below (across 3 seeds), PEAKS performs best across most of the data budgets. ||50%|60%|70%|80%| |---|---|---|---|---| |Random|70.1 (±0.1)|71.1 (±0.1)|72.4 (±0.1)|73.6 (±0.1)| |Moderate|**71.0** (±2.0)|72.2 (±1.6)|73.6 (±1.1)|74.9 (±0.9)| |EL2N|64.9 (±4.9)|69.2 (±2.3)|71.9 (±0.9)|73.8 (±0.2)| |Uncertainty|66.5 (±3.5)|69.7 (±1.1)|71.9 (±0.2)|73.5 (±0.3)| |PEAKS|70.5 (±1.5)|**72.3** (±1.1)|**74.1** (±1.3)|**76.0** (±0.1)| While these preliminary results are promising, we want to emphasize that PEAKS was designed for the incremental setting, where an example's value is tied to the current model state. Offline data selection has different constraints and opportunities, such as the ability to globally optimize selection across the entire dataset. **Weakness-3** We ran a small grid search to select the optimizer, lr, and weight decay that optimized performance during the initial training phase (Appendix B). This approach was motivated by two key considerations: 1. In realistic streaming scenarios, hyperparameters must be selected based on early performance signals rather than retrospectively after seeing final results. 2. The initial phase is method-independent. This ensures a fair comparison where no method is advantaged by the optimizer choice. We found AdamW to perform better in our ResNet-18 and WebVision experiments, likely due to differences from the typical epoch-based full-dataset training. For ViT experiments, we used SGD with momentum (lr 0.001), which aligns with lr options explored by the original ViT authors. Notably, they do not consider a high lr such as 0.1 for fine-tuning (see Appendix Table 4 of [1]). Thus, PEAKS demonstrates strong performance with both AdamW and SGD. [1] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, ICLR 2021 **Weakness-4** We would like to highlight that the WebVision dataset contains 2.4 million images across 1000 classes, making it larger than ImageNet-1K. Furthermore, PEAKS shows the most promising results on this large real-world dataset. As explained in Appendix B.5, we deliberately avoided datasets closely related to ImageNet to prevent a potential bias, since ImageNet serves as our pretraining source. **Weakness-5** We thank the reviewer for this insightful suggestion. Studying IDS under distribution shift would indeed be interesting. However, we would like to clarify that we propose IDS as a problem setting, where data arrives incrementally but from a relatively stable distribution. While we noted that the scenario extends to continual learning (CL), this was to acknowledge the relationship rather than claim PEAKS would directly transfer to that setting without modification. PEAKS was designed to select samples that lead to maximum change in logits. However, such samples may not necessarily be good for avoiding catastrophic forgetting or providing knowledge transfer, which are crucial for CL. CL would likely need different considerations. Extending our work to CL would be a significant research direction requiring substantial modifications. We've noted this as important future work. **Weakness-6** Thank you for suggesting this baseline. *We implemented it exactly as described*: by assigning a score of 0 to correctly predicted examples and a score of 1 - max(softmax) to incorrect ones. Results below replicate our setting from Section 5.2. ||C100|F101|F101-N|WebVision| |---|---|---|---|---| |Data x1|62.7 (±7.0)|48.7 (±3.1)|35.0 (±3.7)|35.2 (±0.4)| |Data x2|77.0 (±2.8)|60.3 (±2.9)|44.8 (±3.7)|40.8 (±0.1)| |Data x4|83.9 (±1.0)|70.7 (±2.3)|53.5 (±3.8)|45.3 (±0.3)| Compared with Table-2 in our paper, this baseline performs strongly on CIFAR100, on par with GraNd. However, on the three larger datasets, it significantly lags behind PEAKS.
Summary: This paper focuses on data selection of DNNs that data arrives as a continuous stream and must be selected without access to the full data source. Based on this, the incremental data selection (IDS) problem is formulated as a three-stage process which including initialization with random samples, (streaming) data selection with model update and final training. To resolve IDS problem, this paper analyzes the impact of new data on model and proposes a score function based on the prediction error and penultimate representation similarity for measuring data importance. As a result, a new algorithm PEAKS is proposed. Experiments on CIFAR-100, FOOD101, Webvision demonstrate the effectiveness of PEAKS compared to random selection, embedding similarity based method, EL2N and GraNd. Claims And Evidence: The claims about IDS and PEAKS are clear and supported by convincing evidence. Methods And Evaluation Criteria: The proposed method (i.e., PEAKS) makes sense but the evaluation can be further improved to compare their empirical time complexity. Theoretical Claims: I have checked the theoretical analysis in section 3. The derivation is correct but the claim is not well supported. Approximations are introduced, including replacing the Jacobian w.r.t. the network parameters with parameters only in the last layer and replacing the mean embeddings of validation data with the weight vector. However, there lacks analysis of how close the approximated scoring function (i.e., Eq. 7 or Eq. 11) compares to the original one (Eq. 3). Experimental Designs Or Analyses: Lack of analysis of time complexity Supplementary Material: I have read the appendix but not read the code Relation To Broader Scientific Literature: The idea of incremental data selection is also related to online/continual learning Essential References Not Discussed: To the best of my knowledge, essential references are included Other Strengths And Weaknesses: Strength: 1. Incremental data selection is a novel setting for data efficient learning 2. The paper is well structured and easy to follow Weakness: 1. IDS still needs to finetune the model on the selected data subset, which does not follow a purely streaming setting. 2. Lack of experimental evidence for the rationality of approximation in deriving scoring function 3. Lack of analysis about the empirical time complexity w.r.t. IDS and PEAKS Other Comments Or Suggestions: None Questions For Authors: My concerns are listed in the weakness section Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their time and constructive comments. **Lack of analysis about the empirical time complexity** We thank the reviewer for highlighting this concern. We will clarify in the main text that all baselines are subject to similar time complexity constraints. All methods require only a forward pass to make selection decisions—with GraNd being the only exception, as it requires an additional backward pass. Furthermore, we fixed the selection rate of IDS constant for all baselines (e.g., 20\% in main experiments). Thus, the sample acquisition speed is consistent across all baselines. **There lacks analysis of how close the approximated scoring function (i.e., Eq.7 or Eq.11) compares to the original one (Eq.3).** We agree with the reviewer's concern and have conducted an additional analysis to address this point. Our intuition for using last layer weight updates as an indicator for the whole network is supported by several studies in feature learning/transfer learning: early layers are more transferable [1, 2]; with pre-training, shallower layers approach optimality faster [3]; and early layers undergo less structural change than later ones during training [4]. Therefore, late layers in comparison are subjected to a lot of changes, thus can be representative of the whole network changes. To quantify the amount of change analytically is out of the scope of this paper. Additionally, note that explicitly computing Eq.3 requires calculating the Jacobian for every validation sample, which is of size (number of parameters × number of classes). Due to computational constraints, *we performed the following analysis on a toy MNIST scenario* rather than on large datasets and the ViT architecture. We first trained a two-hidden-layer MLP with 400 neurons each on 5k MNIST examples for 10 epochs. This step replicates a pretrained model that was trained on a few examples at the initialization phase to achieve some performance on the underlying task. Next, we held out 500 examples for validation and considered another 500 samples for selection using 3 methods: 1. Computing Eq.3 exactly. 2. Eq.9 (last layer training assumption). 3. Eq.11 (last layer training assumption and avoiding validation set). We used a 20% selection following our setting in the paper. Across 10 runs with different seeds, we found high overlap between selected subsets: $\mathbf{98 (\pm 1.4)}$\% between Eq.3 and Eq.9, and $\mathbf{92.9 (\pm 0.9)}$\% between Eq.3 and Eq.11. This demonstrates that although we make some simplifying assumptions, our approximate selection mechanisms closely match decisions with exact computation. We are committed to expanding this analysis and scaling it to more realistic networks and datasets such as CIFAR10 and ResNet-18 in the revised paper as computational resources permit. [1] How transferable are features in deep neural networks?, NeurIPS 2014. [2] What is being transferred in transfer learning?, NeurIPS 2020. [3] Which Layer is Learning Faster? A Systematic Exploration of Layer-wise Convergence Rate for Deep Neural Networks, ICLR 2023. [4] Unveiling the Dynamics of Transfer Learning Representations, ICLR 2024 Workshop Re-Align. **IDS still needs to finetune the model on the selected data subset, which does not follow a purely streaming setting.** We would like to clarify the goal of this final training phase and believe that in our writing, calling it "fine-tuning" might have led to misrepresentation. This final training phase is optional and we added it to ensure that the model also has a chance to iterate a few times over samples selected toward the end of data selection so that final accuracy is representative of the corresponding dataset size. However, this only provides an incremental improvement in performance. Below we report the performance of PEAKS before this final training and after. As seen from these results, PEAKS can also work on purely streaming performance with a minimal accuracy drop due to recently selected examples not yet being learned. | **Dataset** | **Before Final Training** | **After Final Training** | |----------------------|:---------------------------:|:--------------------------:| | CIFAR100 (×1) | 57.56 | 58.99 | | CIFAR100 (×2) | 70.03| 72.73| | CIFAR100 (×4) | 80.71| 82.74| | Food101 (×1) | 49.88| 50.85 | | Food101 (×2) | 60.55| 62.64 | | Food101 (×4) | 69.39| 72.88| | Food101N (×1) | 40.71| 41.42| | Food101N (×2) | 50.92 | 52.63| | Food101N (×4) | 59.16| 62.16| | WebVision (×1) | 37.75| 43.92| | WebVision (×2) | 47.33 | 53.11| | WebVision (×4) | 54.53| 59.02| *Average over three seeds.*
null
null
null
null
null
null
Cross-regularization: Adaptive Model Complexity through Validation Gradients
Accept (poster)
Summary: A method for alternating optimization of regular parameters $\theta$ and regularization hyperparameters $\rho$ is presented: the training data is split into training and regularization sets, and $\theta$ and $\rho$ are alternately fit using each split respectively. The method is proven to converge for convex minimums and around a neighbourhood of non-convex minima, and has convergence in $\rho$ at a rate proportional to the square root of the number of regularization parameters. Empirical results in the following applications show efficacy: 1) $L^2$, $L^1$, and spline fitting regularization for convex problems, 2) adaptive noise regularization of neural networks, 3) other effects for neural networks including uncertainty calibration, data growth, and adaptive augmentation. Claims And Evidence: Claims organized by section: 4. The proposed method converges under various conditions - yes, but limited in scope and with some missing caveats (see "Theoretical claims" below). 5. The proposed method is equivalent to/can replace norm-based regularization for convex problems - yes. 6. The proposed method can regularize neural networks via additive noise - yes, but with some caveats (see Experimental Design or Analyses" below). 7. Automatic uncertainty calibration - yes but not entirely convincing, as a baseline comparison with post-hoc uncertainty calibration methods would be useful here to determine whether adaptive noise is competitive in this area. 8. Data growth - not entirely convincing. Here, a baseline versus some standard continual learning method is needed (e.g. cosine annealing LR schedule). 9. Adaptive data augmentation - yes to a limited extent, as methods and limitations need elaboration. Methods And Evaluation Criteria: Section 9: how does one backpropagate through data augmentation parameters? Presumably only continuous parameters would work, as opposed to e.g. random horizontal flips with probability $p$. Also, there are some scenarios in which I am not sure this method would converge optimally (see "Questions"). Theoretical Claims: Section 4: the theoretical claims are sensible. There is a major unstated issue however, which is that one must assume $\theta$ and $\rho$ are independent. It is clear in section 5 that $|\theta| = 1$ is necessary for $L^2$ regularization to converge (otherwise $\theta$ could grow by $\alpha$ and $\rho$ by $\alpha^{-1}$ without effect). But in the neural network tasks, it is not clear that this is the case - requiring either discussion or a restriction of the results in section 4 to the tasks of section 5. Here is a contrived example of the issue for adaptive noise: if the optimal $\theta$ on $D_{train}$ is unbounded in the direction of $v$ but the optimal $\theta$ on $D_{reg}$ points in the opposite direction, then wouldn't the noise regularization optimize towards $N(\alpha v, \alpha)$? for $\alpha \to \infty$? A minor issue is that theorem 4.2 says nothing about the size of the neighbourhood which has convex approximation. This is not a serious problem in practice (optimization on non-convex neural networks works just fine!) but the theorem says very little in its current form. One could remove it, or elaborate e.g. by relating the neighbourhood radii to some measurable quantity (e.g. network Lipschitz bound). Experimental Designs Or Analyses: Section 6: in the case of adaptive noise for neural networks, there is an untested possibility that if the optimum for added noise is unbounded, the method would not actually be converging in $\rho$. This alternative hypothesis could be tested by running training for much longer and seeing if performance drops as noise increases further. Of course early stopping would avoid this potential issue, but early stopping would also add a hyperparameter for the train/test generalization tradeoff (which this method aims to avoid). $L^2$ regularization is common for neural networks - why not analyze this regularization? This would solve many of the issues with the neural network experiments and provide a solid connection with sections 4-5. Supplementary Material: I have skimmed sections A (proofs), C (neural network details), and E-F (ResNet and parameter sensitivity). Relation To Broader Scientific Literature: The method, in addition to advancing regularization methodology (over Jaderberg et al. 2017, Gal et al. 2017, Molchanov et al. 2017, etc.), also provides an interesting tool for probing the capacity of neural networks (Bartlett et al. 2017, Zhang et al. 2021). The results in figure 2 and 4 corroborate existing literature on 1) over-parameterization in later layers of VGG networks, 2) avoidance of plasticity loss and overfitting in continual learning, and 3) evolution of training towards overfitting which is combatted by data augmentation. Essential References Not Discussed: The high degree of noise in figure 2, as well as its concentration in the later layers of a VGG-16 trained on CIFAR-10, strongly suggests a connection with neural network pruning, where similar results have been found. In particular, the Lottery Ticket Hypothesis (Frankle & Carbin 2018). showed that sparsity rates near 98% are possible for VGG-16, with most of the sparsity concentrated in the last layers. Thus, this result is not as surprising in relation to prior work. Other Strengths And Weaknesses: Strengths: the method looks to be quite promising and useful in a large variety of ML and DL applications. Automating hyperparameter selection and doing so adaptively (during training) is a huge improvement. There are non-trivial computational costs to using this method as it requires more training/evaluation iterations, as well as more data to effectuate a train/regularization split, but based on the parameter sensitivity analysis (section F) they are within a small number of multiples of regular training (keeping in mind that regular hyperparameter tuning would require multiple runs also). Weaknesses: the theory in section 4 may not apply to the neural network tasks, and there are some other issues about the neural network tasks. The method is not quite a drop-in improvement as it requires rethinking parameterization, and in some cases, backpropagation through additional operations like image transformations. ## Update after rebuttal The authors have addressed my comments and I maintain my recommendation for acceptance. Other Comments Or Suggestions: Some figures are too small to be legible - figures 3 and 4 especially. Questions For Authors: Section 9 (data augmentation): could the authors discuss the possibility of the following failure modes? - transformation removes a relevant feature (e.g. 180 degree rotation causes 9 and 6 to be indistinguishable) at the same time there is a strong spurious feature (e.g. sky background for "airplane" images), causing the model to overfit the latter. - transformation makes validation perform worse, so that the optimum becomes no transformation. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your insightful review and constructive technical feedback. Your points have significantly improved the paper's theoretical foundations and connections to existing literature. ### Parameter Independence and Bounded Coupling You correctly identified that our theory requires some degree of parameter independence. Rather than complete independence, our approach requires bounded coupling: $$\|H_{\theta\rho}\| \leq \beta\sqrt{\lambda_{\min}(H_{\theta\theta})\lambda_{\min}(H_{\rho\rho})}$$ This condition prevents contradictory optimization objectives from causing instability when training and regularization gradients conflict. Our empirical stability (see extended simulations: https://tinyurl.com/2s45b236) suggests this condition is satisfied in practice, even for neural networks where the noise parameters serve a fundamentally different functional role than weight parameters. ### Neural Network Theory We've addressed your concern about local convergence with two theoretical advances: 1. A refined local structure theorem (https://tinyurl.com/33d2sye4) that establishes precise bounds for the radius of convergence: $r = \min\left(\frac{\mu}{6L_H}, \frac{(1-\gamma)\mu}{2|H|}\right)$. 2. We've developed a new result showing convergence under practical assumptions for neural networks (https://tinyurl.com/4ckfb9ec) that doesn't require global convexity. Our guarantees now only require that: - The model parameters converge for fixed regularization - The validation loss has local convexity in regularization parameters - The gradient coupling satisfies a Lipschitz condition This theorem establishes that cross-regularization converges for neural networks under assumptions that are consistent with observed empirical behavior, without requiring guarantees of global convergence for the underlying neural network optimization problem. On the choice of stochastic regularization, we also examined L2 regularization but found minimal empirical effect in our neural network architectures. This is expected because L2 regularization primarily constrains weight magnitudes, which are effectively cancelled by the normalization layers present in our networks. ### Connection to Lottery Ticket Hypothesis Your observation connecting our noise-based regularization to the Lottery Ticket Hypothesis enriches our framework. We've quantified this relationship: our noise levels (σ≈13) create an information bottleneck with capacity C≈0.004 bits/symbol, mathematically equivalent to pruning with sparsity ≈99.4%, aligning with LTH's 98%. This connection validates our results by demonstrating that two distinct approaches—gradient-based noise adaptation versus iterative weight pruning—converge to similar architecture-specific patterns. In light of this, we will revise our claims about the novelty of the high noise regime, recognizing we're capturing intrinsic properties of neural information capacity rather than method-specific artifacts. ### Backpropagation Through Augmentation For data augmentation, your concern about optimization potentially minimizing all transformations is insightful. Our solution: 1. We parameterize transformations with continuous magnitude parameters (e.g., rotation angle α determining range U[-α,α]) 2. During training, we apply single random transformations for regularization effect 3. During validation, we average predictions over multiple transformations: $\mathcal{L}_\text{val} = \mathbb{E}_{\text{transformations}}[\mathcal{L}(f(\text{transform}(x)))]$ This creates an optimization equilibrium where validation performance constrains transformation magnitude: excessive transformations that remove discriminative features increase validation loss, while insufficient transformations lead to overfitting. The validation gradients naturally balance these competing factors. Our results confirm this behavior: rotation parameters converge to ~3° for SVHN while shear transformations are optimized toward zero. The method identifies which transformations provide regularization benefits for a specific dataset and minimizes those that don't. For discrete transformations like horizontal flips, we agree this remains a limitation without relaxation techniques, which we're exploring in follow-up work. ### Additional Baselines Based on your feedback, we're implementing: - Temperature scaling and deep ensembles for calibration comparisons - EWC for continual learning benchmarks Our preliminary calibration results (https://tinyurl.com/3mtacsu9) show considerable improvements over post-hoc methods, supporting our claim that uncertainty calibration emerges naturally in our approach. Thank you for your thoughtful review. The connection to pruning literature and your theoretical insights have improved both the paper and our analysis of cross-regularization's properties.
Summary: This paper designs an approach to tune model regularization parameters automatically. Instead of relying on cross-validation, which requires training multiple models, the proposed method adapts regularization parameters dynamically by using validation gradients during training. This approach alternates between feature learning (optimizing model parameters using training data) and complexity control (optimizing regularization parameters using validation data). This work proves convergence to cross-validation optima and shows that it is applicable across different types of regularization, including norm-based penalties, noise injection in neural networks, and data augmentation. Experimental results demonstrate its effectiveness in discovering architecture-specific regularization patterns, improving uncertainty calibration, and adapting to growing datasets. Claims And Evidence: - This work proposes an algorithm that alternates between updating model parameters via SGD and updating regularization parameters. Yet, in the formulation (Equation 1), the regularization parameters are defined as a function $\rho$. How does the algorithm apply the gradient descent on the function $\rho$ in Equation 5. - The proposed method applies to differentiable functions with regard to $\rho$. How can the algorithm extend to a broad range of regularization methods, such as mixup, label smoothing, and sharpness-aware minimization? Methods And Evaluation Criteria: - The method is close to bilevel optimization algorithms using second-order gradients. How does the method compare to these methods? For example, "Weighted Training for Cross-Task Learning" by Chen et al. 2021. - It would be better to summarize the experimental setup of each section, to better understand the experimental settings. Theoretical Claims: The paper establishes four theoretical results: (1) alternating updates for model and regularization parameters converge linearly, (2) local optimization guarantees stability through smooth loss landscapes, (3) statistical error scales with the number of regularization parameters rather than model parametersy, and (4) cross-regularization achieves performance equivalent to optimal cross-validation. - Yet, how the results extend to non-convex settings, such as optimizing deep neural networks, is unclear. Experimental Designs Or Analyses: Please see Methods and Evaluation Criteria. Supplementary Material: I have read through the Theoretical Analysis of the Appendix. Relation To Broader Scientific Literature: How does the work compare to existing bilevel optimization methods, in terms of computational complexity and generalization performance? Essential References Not Discussed: Please see the discussion above. Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your review and address your concerns below: ### 1. How We Apply Gradient Descent on Regularization Parameters Our method works by making regularization parameters explicit and directly optimizable: **L2 Regularization Example:** 1. We rewrite weights as `w = ρθ` where `||θ||₂ = 1` and `ρ` is the norm 2. Training updates: `θₜ₊₁ = θₜ - ηθ∇θL_train(θₜ,ρₜ)` - optimizing on training data 3. Regularization updates: `ρₜ₊₁ = ρₜ - ηρ∇ρL_val(θₜ₊₁,ρₜ)` - optimizing on validation data This makes regularization strength a parameter receiving direct gradient updates, not a hyperparameter requiring grid search. ### 2. Neural Network Convergence While our theoretical guarantees in Section 4 apply to convex settings, our neural network experiments show consistent convergence. The optimization remains stable even with high noise levels, and the validation accuracy improvements are robust across architectures. We've extended our simulations to 600 epochs (https://tinyurl.com/2s45b236) showing stable noise patterns over time. Our additional theorem (https://tinyurl.com/4ckfb9ec) proves that cross-regularization doesn't introduce instability beyond standard neural network training. The stability of our method is evidenced by its outperforming fixed regularization, without the divergence issues sometimes seen in other adaptive methods like variational dropout. ### 3. Application to Other Regularization Techniques Cross-regularization applies to any regularization with: - Parameterizable strength - Differentiable validation performance For data augmentation (Section 9), we parameterize transformation magnitudes and optimize them through validation gradients. Similar approaches could apply to mixup (parameterizing α) or label smoothing (parameterizing smoothing strength). We focused on norm-based and noise-based regularization as they provide clear demonstrations of our approach. Extensions to non-differentiable techniques would require additional work beyond the scope of this paper. ### 4. Distinction from Bilevel Optimization Our approach differs fundamentally from traditional bilevel optimization: 1. We update parameters directly through validation loss, not through the optimization trajectory 2. We maintain separate parameter spaces for model features versus complexity 3. We don't require approximations of second-order derivatives or parameter history This distinction makes our method implementable with standard optimizers and applicable to large models where traditional bilevel methods become computationally intractable. ### 5. Experimental Details Our experimental implementation follows the algorithm in Section 3: - **L2/L1 experiments**: Alternating SGD updates between model parameters and regularization parameters - **Neural networks**: Layer-wise noise parameters optimized every 30 training steps using 3-5 MCMC samples - **Augmentation**: Parameterized transformations updated through validation-based MC averaging The code requires only ~20 lines beyond standard training loops, making implementation straightforward for most existing frameworks. ### 6. Relation to Transfer Learning We thank you for noting the connection to Chen et al. (2022). Both approaches use validation gradients but for different purposes: their work for weighting source tasks in transfer learning, ours for optimizing regularization parameters. We will include this reference. We appreciate your feedback and will address these points in our revision.
Summary: This paper proposes a cross-regularization method that eliminates manual hyperparameter search by directly optimizing weight norms. The approach orthogonally decomposes weight parameters into two complementary components, transforming the optimization problem into two subproblems solved through an alternating optimization strategy.(1) Specifically, during training phases, it maintains fixed model complexity while optimizing parameter directions, whereas (2) validation phases optimize regularization magnitudes. Theoretical analysis demonstrates that this method achieves equivalence to standard cross-validation under certain conditions, while exhibiting faster convergence rates and superior calibration performance. Claims And Evidence: Most of the claims made in the submission is supported clearly. For example: (1) Detailed proofs in the Appendix (Theorems A.1–A.6) under smoothness and strong convexity assumptions. Linear convergence is demonstrated for convex cases. (2) Layer-wise noise patterns (Figure 2) and comparison with PBT demonstrate effectiveness. Extreme noise levels are shown to function without collapse. One negative points should be noted: While reliability diagrams (Figure 3) show improved calibration, no comparison is made to state-of-the-art methods (e.g., deep ensembles, temperature scaling). The ECE metric is reported but lacks statistical significance tests. Methods And Evaluation Criteria: 1. Cross regularization separates model parameters (optimized on training data) and regularization parameters (optimized on validation data), utilizing validation gradients to directly adjust model complexity. This design is theoretically innovative, avoiding the computational overhead of traditional cross-validation while providing continuous generalization feedback. 2. The diabetes dataset (regression) and CIFAR-10 (classification) serve as standard benchmarks, ensuring the comparability of experimental results. However, the study lacks validation on larger-scale datasets (e.g., ImageNet), which constrains the generalizability of the conclusions. Theoretical Claims: The interplay mechanism between data and model uncertainties via dual-level structures needs rigorous mathematical justification. Experimental Designs Or Analyses: The paper lacks validation on larger-scale datasets (e.g., ImageNet), which constrains the generalizability of the conclusions. Supplementary Material: The supplementary material mainly includes additional details, experimental results and the visualization results. Relation To Broader Scientific Literature: The method proposed in the paper is related to methods like variational dropout[1] and Concrete Dropout[2], but differs by optimizing regularization parameters directly using validation gradients. [1] Variational dropout sparsifies deep neural networks. [2] Concrete dropout. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. Novel Idea: The paper introduces cross-regularization, an adaptive method that removes the need for manual tuning of regularization parameters. 2. Broad Applicability: It works with various regularization techniques, including noise-based regularization, data augmentation, and uncertainty calibration. 3. Strong Experiments: The method is tested on different regularization forms and architectures, showing good generalization, adaptability and calibration performance as well. Weaknesses: 1. The paper mainly compares with PBT, but not with other hyperparameter tuning methods. 2. The related work is comprehensive but somewhat outdated. It is recommended to include more recent studies on regularization; for example, most of the references are from around 2016. 3.The structure of this paper may not be clear enough; for example, the paper contains 10 sections. 4. Method 3.3 cross-validation equivalence is not verified while 3.2 is well verified. 5.The study lacks validation on larger-scale datasets (e.g., ImageNet), which constrains the generalizability of the conclusions. Other Comments Or Suggestions: 1. Theoretical analysis relies on convexity assumptions, while experiments on neural networks lack ablation studies (e.g., no comparison to variational dropout or concrete dropout). The claim of "local convergence" (Theorem 4.2) is not empirically validated for deep networks. 2. While reliability diagrams show improved calibration, no comparison is made to state-of-the-art methods (e.g., deep ensembles, temperature scaling). The ECE metric is reported but lacks statistical significance tests. Questions For Authors: Please refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful assessment and constructive feedback. Below, we address the main concerns raised: ### Recent Literature We agree that our literature review requires updating. In the camera-ready version, we will incorporate recent advances in adaptive regularization and complexity control, including: - Foret et al. (2020) "Sharpness-Aware Minimization for Efficiently Improving Generalization" - Nado et al. (2020) "Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift" - Chen et al. (2021) "Weighted Training for Cross-Task Learning" ### Uncertainty Calibration Comparisons We have implemented comparisons with established calibration methods: 1. **Temperature Scaling (Guo et al., 2017)**: Our preliminary results (https://tinyurl.com/3mtacsu9) demonstrate that cross-regularization maintains better calibration throughout training compared to temperature scaling with fixed dropout (σ=0.1 in all layers), while achieving higher performance: | Model | Accuracy | ECE | MCE | |-------|----------|-----|-----| | X-Reg | 79.49 | 0.0376 | 0.0993 | | X-Reg CI | | (0.0325, 0.0454) | | | Uncalibrated | 67.40 | 0.1628 | 0.2810 | | Uncalibrated CI | | (0.1552, 0.1706) | | | Temperature Scaling | 69.63 | 0.0571 | 0.1944 | | Temperature CI | | (0.0500, 0.0649) | | 2. **Deep Ensembles (Lakshminarayanan et al., 2017)**: This comparison is currently underway and will be included in the camera-ready version with appropriate statistical significance tests and confidence intervals. These analyses will demonstrate how cross-regularization provides well-calibrated uncertainties without requiring separate post-hoc calibration steps or ensemble overhead, addressing a key limitation in current practice. ### Additional Hyperparameter Tuning Comparisons While we compared with PBT and fixed dropout, we found that Variational Dropout frequently diverged without implementation modifications that would bias the comparison. In the revised version, we will include: 1. Comparisons with Concrete Dropout (Gal et al., 2017) 2. Analysis of noise dynamics for all methods (where convergent) 3. Computational efficiency metrics (wall-clock time and memory usage) ### Cross-Validation Equivalence Verification Regarding the comment on Method 3.3 (Parameter Partition through Gradient Decomposition) verification, we note that Theorem 4.4 establishes the theoretical equivalence. Our empirical validation in Section 5.1 (Figure 1A-C) demonstrates that cross-regularization converges to the same optimal solution as cross-validated regression, confirming the theoretical result. We will clarify this connection in the revised paper. ### Large-Scale Dataset Validation While our primary contribution is methodological—introducing and analyzing a novel regularization framework—we acknowledge the value of validating on larger-scale datasets. Our current experimental suite (CIFAR-10, diabetes dataset) demonstrates the method's applicability across different regularization types and model architectures, though we recognize that larger datasets would provide additional validation. We are exploring experiments on ImageNet with ResNet-50, focusing on: 1. Confirming that the theoretical and empirical advantages scale to larger datasets 2. Identifying any emergent layer-wise regularization patterns specific to deeper architectures 3. Quantifying computational advantages over traditional hyperparameter tuning at scale These extensions would complement our methodological contribution rather than being essential to validating the core approach. ### Neural Network Theory and Stability We have extended our simulations to 600 epochs (https://tinyurl.com/2s45b236), demonstrating empirical stability of the regularization parameters. For the theoretical foundation, we have developed a new result showing that, under assumptions on regularization parameter behavior, convergence depends only on the standard neural network optimization properties. The full proof is available at: https://tinyurl.com/4ckfb9ec. ### Paper Structure We appreciate the feedback on organization. In the revised version, we will: 1. Merge the Introduction with Background sections 2. Consolidate the application sections (7-9) into a single "Applications and Extensions" section 3. Provide clearer transitions between theoretical development and empirical validation 4. Maintain a consistent narrative flow from problem formulation to practical applications We thank the reviewer for their detailed feedback, which has highlighted important areas for improvement while acknowledging the novel contributions of our cross-regularization framework.
null
null
null
null
null
null
null
null
Robust Offline Reinforcement Learning with Linearly Structured $f$-Divergence Regularization
Accept (poster)
Summary: This paper introduces a new framework, the $d$-rectangular linear robust regularized Markov decision process ($d$-RRMDP), for offline RL and develops a family of algorithms called robust regularized pessimistic value iteration (R2PVI) to learn robust policies. Upper bounds on the sub-optimality gap and the information theoretic lower bounds for $d$-RRMDPs are also provided. Numerical results are included to support the theoretical guarantees. Claims And Evidence: The baseline selection is reasonable, but incorporating more comprehensive evaluation criteria would further strengthen the analysis. Methods And Evaluation Criteria: The baseline selection is reasonable but a more complex evaluation criteria will be benificial. Theoretical Claims: Most claims are well-supported, but the comparison of upper bounds with existing works lacks a thorough analysis of the robustness-efficiency trade-off. Since $\lambda$ controls the degree of regularization and robustness, evaluating only its scale relative to $\beta$ in the suboptimality gap is insufficient. A more comprehensive discussion on how choosing $\lambda$ affects robustness while maintaining efficiency compared to prior methods would strengthen the argument. Experimental Designs Or Analyses: Is that possible to validate the algorithms in a more complex setup, like mujoco? Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: 1. This paper introduces a new framework for enhancing robustness in offline RL, incorporating a broad class of divergences to improve generalization. 2. It establishes upper bounds on suboptimality and information-theoretic lower bounds for $d$-RRMDPs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: This work presents a comprehensive theoretical analysis, establishing both upper and lower bounds for the newly proposed framework and algorithms. Weaknesses: The discussion on the practical impact and applicability of the method in real-world scenarios is limited. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for positive feedback on our work. We hope our response fully addresses your questions --- **Q**: A more comprehensive discussion on how choosing λ affects robustness while maintaining efficiency compared to prior methods would strengthen the argument. (more experiment) **A**: Figure 1b demonstrates how the parameter $\lambda$ controls the robustness of our policy, where robustness is defined as the policy's resistance to performance degradation under environmental disturbances. In general, larger values of $\lambda$ yield more robust optimal policies. However, we caution against direct comparisons of $\lambda$ values across different divergence measures (TV, KL, chi2), as varying $\lambda$ fundamentally alters the optimization objective and thus the resulting optimal policy. For a rigorous theoretical analysis of the relationship between λ and the robustness radius $\lambda$, we direct the reviewer to the discussion in Line 318. --- **Q**: Clarify of computation efficiency **A**: While our work primarily focuses on the theoretical analysis of the RRMDP framework, we acknowledge that extending this framework to more general experimental settings represents an important complementary research direction. Such extensions present distinct challenges that differ fundamentally from theoretical analysis, requiring careful consideration of practical implementation issues. Nevertheless, recent research has highlighted the promising potential of extending theoretical frameworks like RRMDP to more complex and realistic environments. - The learning of feature mapping. Recent advancements, such as [1], demonstrate the feasibility of generalizing linear MDPs to more flexible representations. This line of work offers valuable insights into how complex scenarios can be modeled through linear MDP settings. - The development of specialized benchmarks like [2] provides valuable infrastructure for systematically evaluating different approaches to uncertainty modeling, including both DRMDP and RRMDP frameworks. These developments highlight the feasibility of extending theoretical results to practical applications. However, given our current focus on establishing fundamental theoretical guarantees and the associated time constraints, we leave the design of comprehensive experiments in more general environments as an important direction for future research. --- We hope that we have addressed all of your questions/concerns. If you have further questions, we would be happy to answer them and if you don’t, would you kindly consider increasing your score? --- References [1] Zhang, Tianjun, Tongzheng Ren, Mengjiao Yang, Joseph Gonzalez, Dale Schuurmans, and Bo Dai. "Making linear mdps practical via contrastive representation learning." In International Conference on Machine Learning, pp. 26447-26466. PMLR, 2022. [2] Shangding Gu and Laixi Shi and Muning Wen and Ming Jin and Eric Mazumdar and Yuejie Chi and Adam Wierman and Costas Spanos. "Robust Gymnasium: A Unified Modular Benchmark for Robust Reinforcement Learning." arXiv preprint arXiv: 2502.19652. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for providing these details, which address most of my concerns. Thus, I will retain my original positive score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for your positive feedback!
Summary: The authors proposed a framework to solve the d-rectangular linear RRMDP. They extend the previous work under the distributional robust MDP framework by unifying three ways that define the potential MDPs consistent with the offline datasets and provide the theoretical analysis on the proposed method. Some brief experimental results are given to show the effectiveness of the proposed method. Claims And Evidence: The paper is clear-structured and the theoretical performance of the proposed method is good. Methods And Evaluation Criteria: yes Theoretical Claims: 1. In the introduction, the authors raised two main problems of previous method: Theoretical gaps and computation complexity. Although the authors theoretically prove the previous one, about the latter, only a simple experiment is provided, which is not sufficient to prove this point - computation complexity, is well addressed. Could the authors provide more discussion about this? For instance, compare with [Pessimistic q-learning for offline reinforcement learning: Towards optimal sample complexity]. 2.The whole framework is based on the linear MDP, as assumed in 3.1, which may hinder the practical application of the proposed method. Experimental Designs Or Analyses: Experiments are too simple. could the proposed methods be suitable for more complicated benchmarks, such as MuJoCo? Supplementary Material: no Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for positive feedback on our work. We hope our response fully addresses your questions --- **Q**: Difference with [Pessimistic q-learning for offline reinforcement learning: Towards optimal sample complexity] **A**: We claim the difference between our work and [Pessimistic q-learning for offline reinforcement learning: Towards optimal sample complexity]. 1. Different objectives: - Shi et al. primarily address dynamics shift and sample complexity challenges in standard MDPs, focusing on learning a policy under the empirical transition dynamics. - Our work studies the Robust RL (RMDP) framework, which optimizes policies under the worst-case transition kernel within a structured uncertainty set. This leads to fundamentally different problem formulations and optimal policies. 2. Different Environments and Assumptions: - Shi et al. operates in tabular settings with finite state and action spaces. - Our work adopts a linear MDP structure, enabling scalable function approximation and extending the analysis to high-dimensional settings. --- **Q**: Clarify of computation efficiency **A**: We emphasize that our computational efficiency is evaluated relative to the standard Distributionally Robust MDP (DRMDP) framework and its associated algorithms. Under both TV and KL divergence measures, our approach achieves superior computation efficiency by leveraging closed-form duality solutions, which are more tractable than those in the conventional DRMDP framework. In large state-action spaces, solving the dual problem under the DRMDP framework becomes computationally prohibitive, whereas our method remains scalable. For a detailed comparison of computational complexity, we refer the reader to Figure 2. --- We hope that we have addressed all of your questions/concerns. If you have further questions, we would be happy to answer them and if you don’t, would you kindly consider increasing your score? --- References [1] Shi, L., Li, G., Wei, Y., Chen, Y., and Chi, Y. (2022). Pessimistic Q-learning for offline reinforcement learning: Towards optimal sample complexity. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pages 19967–20025. PMLR.
Summary: This paper studies ways to learn a good policy in offline RL with Linear MDPs such that the policy is robust to changing the model within some f-divergence neighborhood. More precisely, the authors consider linear MDPs and suppose they have access to an offline data set of trajectories. Unlike standard offline RL, the authors aim to find policies that have optimal robust-regularized value, where this function is defined as the infimum over a set of feasible models (transition densities) of the expected cumulative reward of a policy plus a regularization parameter multiplied by some divergence between the model and some baseline model. The authors consider three f-divergences: TV, KL, and chi^2 divergence and introduce an algorithmic framework based on dynamic programming, which essentially produces pessimistic estimates of the regularized Q function at each time step using an elliptic bonus. The linearity of the model allows for a tractable decomposition of the f-divergence regularized Q-function as represented by a linear function of the features, where the representation is the sum of the reward representing vector and another vector that solves a variational problem representing the regularized value function; again, this decomposition holds due to the linearity of the MDP. The authors express this latter variational problem as alearning problem and demonstrate htat this vector can be learned, which is how the reward estimates are constructed. As mentioned above, pessimism is introduced through a standard elliptic bonus. The authors use this approach to show that their dynamic programming algorithm achieves robustness and, given suffficient feature coverage of the offline dataset, good performance. The authors show through a lower bound that the feature coverage is necessary information theoretically. Finally, the authors conclude with a small experimental suite comparing their approach to pessimistic algorithms that do not incorporate robustness and other recent robust offline algorithms. #### After the rebuttal, I maintain my original (positive) score. I remain a little bit concerned about the numerical issues in long-horizon, sparse reward settings for the KL regularization setting, where we expect to see expressions like $e^{O(H)}$. Claims And Evidence: Yes. Methods And Evaluation Criteria: I think the methods are in general reasonable, given the limited number of naturally occurring linear MDPs. Theoretical Claims: I checked some of the proofs of the theoretical claims, in particular the proofs of Proposition 4.1 and Theorem 5.1. The remaining results are believable given standard RL theory, but the reviewing time was insufficient to provide a detailed check. Experimental Designs Or Analyses: They seem fine. Supplementary Material: See proofs. Relation To Broader Scientific Literature: See summary. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I think one minor weakness of this work is the notion of robustness. While I understand it is important for the analysis, it seems to me that requiring robustness to changes on $\mu$ is a fairly weak notion and more applicable notions of robustness might be policies that are robust to imperfectly learning the featurization (as in low-rank MDPs) or policies that are robust to small perturbations of the linearity assumptions themselves. I also think that the paper could be significantly more clearly written, with notation defined before it is used as opposed to deferred to the appendix. Other Comments Or Suggestions: What is the [0,H] in the second line of equation 4.4? Questions For Authors: - Are there not potential numerical issues with the expression for the w_h in the KL case considering that solving this problem requires exponenentiating something on the order $\Theta(H / \lambda)$? - Is there a formulation for general f-divergences beyond those considered in the paper? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for positive feedback on our work. We hope our response fully addresses your questions --- **Q**: “While I understand it is important for the analysis, it seems to me that requiring robustness to changes on $\mu$ is a fairly weak notion and more applicable notions of robustness might be policies that are robust to imperfectly learning the featurization (as in low-rank MDPs) or policies that are robust to small perturbations of the linearity assumptions themselves.” **A**: Thank you for your insightful comment. We fully agree that considering different notions of robustness is crucial in policy learning, as the literature on robust MDPs explores diverse structured uncertainty sets. Regarding your reference to “policies that are robust to imperfectly learning the featurization (as in low-rank MDPs) and or policies that are robust to small perturbations of the linearity assumptions themselves”, we interpret this as pertaining to policies derived under an more general uncertainty set, such as (s,a)-uncertainty set. In such cases, the worst-case transition kernel may not admit linear structure, distinguishing it from our focus. We further highlight the difference between the difference of d-rectangular uncertainty structure and (s,a)-uncertainty set structure. - (s,a)-Uncertainty sets permit correlated perturbations across state-action pairs, leading to more conservative min-max policies (Iyengar, 2005; Wiesemann et al., 2013). - d-Rectangular Uncertainty sets assume independent worst-case dynamics for each (s,a) pair, resulting in less conservative policies. As noted in our work (Lines 75–90, left column), prior research has established the tractability of regularization-based frameworks under (s,a)-uncertainty sets. Our contribution fills the theoretical gaps for d-rectangular settings, which we believe is of independent interest. --- **Q**: The clearly written of robustness **A**: We sincerely thank the advice for removing the notation back to the main text. We remove the notation to the appendix mainly for satisfying the page limit required by the ICML. We have moved the notation to the main text. --- **Q**: “What is the [0,H] in the second line of equation 4.4?" **A**: The subscript of [0,H] refers to clip the vector into interval [0,H]. We have provided illustrations in the main text and notation. --- **Q**: Numerical issues when solving on O(H/lambda) **A**: We appreciate the reviewer's insightful comment regarding potential numerical instability in the KL divergence case. In our current experimental setting (with limited horizon H and bounded value functions), we did not encounter such numerical issues. As for potential numerical issues, we assume that the numerical issues may result from taking log-transformation of $e^{H/\hat{V}}$ (line 250- 258). However, thanks to the homogeneity and linear properties of Robust Regularized Bellman Equation (prop 3.2), the reward can be normalized in [0,1] to adjust to different circumstances, this may avoid numerical problems raised from the estimation of $e^{H/\hat{V}}$. --- **Q**: “Is there a formulation for general f-divergences beyond those considered in the paper?" **A**: For the definition of general f-divergence, we direct the reviewer to the notation introduced in our article (Line 551), which aligns with prior works (e.g., Yang et al. and Panaganti et al.). While our RRMDP framework and algorithms can be extended to general f-divergence (see appendix A for details), we also highlight that we provide specific theoretical analysis with TV, KL, chi2 for two key reasons: - Practical Relevance: the TV, KL, chi2 divergence have already been adopted and commonly used in empirical RL. (Shi et al.) - Theoretical Challenges: Analyzing sample complexity under general f divergence remains an open problem due to the varying dual formulations induced by different divergences (see appendix A for detailed discussion). A unified framework for general f-divergences is an exciting direction for future work. --- We hope that we have addressed all of your questions/concerns. If you have further questions, we would be happy to answer them and if you don’t, would you kindly consider increasing your score? --- References [1] Iyengar, G. N. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, 2005. [2] Wiesemann, W., Kuhn, D., Rustem, B.: Robust Markov decision processes. Mathematics of Operations Research 38(1), 153–183 (2013) [3]Yang, W., Wang, H., Kozuno, T., Jordan, S. M., and Zhang, Z. Robust markov decision processes without model estimation. arXiv preprint arXiv:2302.01248, 2023. [4]Panaganti, K., Wierman, A., and Mazumdar, E. Model-free robust $\phi$-divergence reinforcement learning using both offline and online data. arXiv preprint arXiv:2405.05468, 2024a. [5]Shi, L. and Chi, Y. Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity. JMLR, 2024.
null
null
null
null
null
null
null
null
Stabilizing Sample Similarity in Representation via Mitigating Random Consistency
Accept (poster)
Summary: This paper addresses the challenge of measuring and improving representation quality in deep learning models. The authors propose a novel loss function called Pure Square Euclidean Distance (PSED) that measures the discriminative ability of representations by computing the Euclidean distance between a similarity matrix and the true adjacency matrix. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, the experimental evaluation is extensive, covering both benchmark and image datasets, and it demonstrates the method's effectiveness. Theoretical Claims: Yes, The paper provides solid mathematical analysis of PSED's properties, including formal proofs of heterogeneity and unbiasedness, and establishes generalization performance bounds using exponential Orlicz norm-based concentration inequalities. Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No Other Strengths And Weaknesses: Weakness: 1. The experiments focus primarily on fully connected networks instead of exploring more complex architectures, such as CNNs or Transformers, which are common in modern deep learning. 2. PSED still introduces additional computational complexity compared to the standard CE loss, especially for large batch size training. Additional computation cost analysis should be conducted. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer WSCE, &nbsp;&nbsp;&nbsp;&nbsp; We are very grateful for your valuable comments and questions. The responses are as follows: **Response to Weaknesses 1:** &nbsp;&nbsp;&nbsp;&nbsp; For the image dataset, we solely employ VGG and MoCo v3 for feature extraction. The specific methodology is detailed in Section 7.2 of the main text and Appendix 3.0.1. In this modification, we trained more sophisticated networks (ViT and CNN) using our loss function. The experimental results and detailed parameter configurations are as follows. &nbsp;&nbsp;&nbsp;&nbsp; In this study, we conduct experiments utilizing a hardware configuration that includes an Intel (R) Core (TM) i7-14700F CPU, 16GB of RAM, and an NVIDIA GeForce RTX 4060 GPU. The experiments are carried out on a Windows operating system, with Python 3.10 serving as the programming language and the PyTorch 2.4 library employed for model development and training. &nbsp;&nbsp;&nbsp;&nbsp; For the Vision Transformer (ViT) [1] model training, we use a stochastic gradient descent (SGD) optimizer with a learning rate set to 0.001. The batch size is set to 64, and the model is trained for a total of 100 epochs. The dataset is partitioned into training and testing subsets with a ratio of 7:3. &nbsp;&nbsp;&nbsp;&nbsp; Similarly, the ConvNet [2] model employs the AdamW optimizer, with the learning rate set to 0.004. This model also uses a batch size of 64 and undergoes training for 100 epochs. In alignment with the ViT model, the dataset is divided into training and testing sets, preserving the same 7:3 ratio. &nbsp;&nbsp;&nbsp;&nbsp; Performance evaluation of both models is conducted using accuracy and F-measure as the primary metrics. Analysis of the results, as presented in the table, our method shows better performance over existing approaches. This indicates the effectiveness of our method. | Method| ConvNet| ConvNet-PSED | ViT| ViT-PSED | |- |- |- |- |- | |**Accuracy** | | | | | |Mpeg | 0.8238 | **0.8310** | 0.6820 | **0.7095** | |Mnist | 0.9904 | **0.9925** | 0.8590 | **0.9290** | |Pendigits | 0.9708 | **0.9832** | 0.9568 | **0.9711** | |Caltech-101 | 0.6429 | **0.6536** | 0.7393 | **0.8500** | |ImageNet | 0.9651 | **0.9687** | 0.9964 | **0.9968** | |**F-measure** | | | | | |Mpeg | 0.8170 | **0.8280** | 0.5515 | **0.6811** | |Mnist | 0.9900 | **0.9930** | 0.8550 | **0.9277** | |Pendigits | 0.9699 | **0.9826** | 0.9512 | **0.9700** | |Caltech-101 | 0.6100 | **0.6230** | 0.6034 | **0.7866** | |ImageNet | 0.9650 | **0.9695** | 0.9952 | **0.9973** | [1] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations, 2021. [2] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, et al. A ConvNet for the 2020s. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11966–11976, 2022. **Response to Weaknesses 2:** &nbsp;&nbsp;&nbsp;&nbsp; Thank you for your suggestions. We have added a runtime comparison, as shown in the table below. In this study, we conduct experiments utilizing a hardware configuration that includes an Intel (R) Core (TM) i7-12700F CPU, 32GB of RAM, and an NVIDIA GeForce RTX 3060 Ti GPU. For the first 20 benchmark datasets, the table below presents the total time consumption (in seconds) for both training and testing. For image datasets, the recorded values represent the training and prediction time (in seconds) on the fully connected network shown in Figure 3, after feature extraction using either MoCo v3 or VGG. From the table, it can be observed that our method does not significantly increase computational time. | Dataset | CE | CE-SED | CE-inform | CE-PSED | |--|--|--|--|--| | 1 | 2.34 | 3.07 | 3.49 | 6.03 | | 2 | 2.35 | 3.07 | 3.44 | 8.94 | | 3 | 2.44 | 3.93 | 3.57 | 9.14 | | 4 | 2.88 | 8.7 | 4.47 | 11.13 | | 5 | 3.23 | 9.41 | 4.83 | 12.02 | | 6 | 3.33 | 9.65 | 5.03 | 6.46 | | 7 | 6.44 | 13.33 | 12.58 | 8.96 | | 8 | 12.81 | 10.06 | 27.73 | 12.09 | | 9 | 13.75 | 7.62 | 18.54 | 11.58 | | 10 | 13.72 | 7.62 | 8.96 | 9.31 | | 11 | 10.3 | 10.25 | 11.59 | 11.81 | | 12 | 9.99 | 12.97 | 15.32 | 15.93 | | 13 | 9.98 | 12.94 | 14.9 | 15.39 | | 14 | 10.93 | 14.2 | 16.35 | 27.15 | | 15 | 8.29 | 10.76 | 12.74 | 13.26 | | 16 | 15.52 | 30.26 | 33.69 | 24.15 | | 17 | 22.06 | 28.64 | 33.89 | 35.15 | | 18 | 34.3 | 31.46 | 37.09 | 38.56 | | 19 | 38.57 | 49.95 | 63.52 | 68.38 | | 20 | 96.47 | 126.55 | 149.74 | 155.72 | | Mpeg | 8.18 | 9.94 | 16.36 | 18.43 | | Mnist | 27.39 | 35.2 | 51.52 | 55.51 | | Pendigits | 22.4 | 29.61 | 45.99 | 50.55 | | Caltech-101 | 32 | 42.59 | 70.91 | 80.32 | | ImageNet | 46.86 | 58.76 | 86.41 | 96.38 | --- Rebuttal Comment 1.1: Comment: Thanks for the clarification; my question is solved, and I will keep my positive score. However, I also encourage the author to explore methods to improve the efficiency of the loss computation process, as the results show that the additional cost is not minimal.
Summary: The manuscript introduced a new sample similarity measure. The main difference with the existing sample similarity measure is that the new measure mitigates random consistency. The measure forces class-level discrimination. Several theoretical results regarding the measure have been introduced (quality of stohastic estimation, heterogeneity, unbiasedness). Experiments with real-world datasets show that the measure improves classification quality. Claims And Evidence: Main claims of the paper are valid (see below). ``` • A loss function for measuring the ability of the representation layer is proposed, and an explicit solution for the loss function in the version of eliminating random consistency is given. • Through theoretical analysis, the advantages of this metric in heterogeneity and unbiasedness have been demonstrated, and a generalization bound has been provided for the generalization performance of the loss function in fully connected layer network structures. • A fully connected network classification model based on this loss function was proposed, and the effectiveness of the algorithm was verified through extensive experiments. ``` Methods And Evaluation Criteria: If I understood correctly, for image datasets like ImageNet, Caltech-101 you used fully-connected network. This is problematic because convolutional architectures are typically applied for images. Theoretical Claims: No Experimental Designs Or Analyses: Experiment design is mostly correct. Which train/test splits you have used? Supplementary Material: No Relation To Broader Scientific Literature: The manuscript is related to a broader literature of representational similarity measures. Essential References Not Discussed: -- Other Strengths And Weaknesses: It is interesting that you have improved standard cross-entropy loss. Other Comments Or Suggestions: 1. > "Random consistency (RC), resulting from randomness rather than true similarity, often arises in consistency evaluation, particularly with limited samples or noise" this sentence is not clear. Can you please clarify it? 2. K in eq.3 is not defined 3. line 183, Property 3. Matrices I, J are not defined. 4. lin 276, typo: "symmilaritym atrix" Questions For Authors: 1. CE-PSED metric is never defined explicitly, only CE and PSED losses separately. 2. Why the performance gap between CE and CE-PSED is large for some datasets like Mpeg of Caltech-101 ? 3. Which neural architecture you have used for the ImageNet dataset? Ethical Review Concerns: -- Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer GtCV, &nbsp;&nbsp;&nbsp;&nbsp; We are very grateful for your valuable comments and questions. The responses are as follows: **Response to Methods And Evaluation Criteria:** &nbsp;&nbsp;&nbsp;&nbsp; For the image dataset, we firstly employed feature extractors such as MoCo v3 or VGG for processing, with downstream classification processing handled by fully connected layers. The specific methodology is detailed in Section 7.2 of the main text and Appendix 3.0.1. We note that there may be a typesetting oversight in Figure 3, where images were mistakenly labeled as direct inputs. This issue will be corrected in the revised version. Besides, in this modification, we implement our loss function on a more sophisticated network architecture(ViT and CNN). The experimental results and detailed settings are listed in our response to Reviewer WSCE Weakness 1. **Response to Experimental Designs Or Analyses:** &nbsp;&nbsp;&nbsp;&nbsp; We adopted random splitting to generate the training and test sets, as described in Appendix 3.0.1. **Response to Other Comments Or Suggestion 1:** &nbsp;&nbsp;&nbsp;&nbsp; The consistency metric quantifies the degree of agreement or alignment between different variables. Random Consistency (RC) refers to agreement or similarity that occurs purely by chance, not because of any true underlying pattern or relationship. When the sample size is limited or the noise level is high, the random consistency (RC) between the classifier's predicted labels and the true labels increases [1]. For more details on RC, please refer to Question 3 of Reviewer GtCV. [1] Jieting Wang, Yuhua Qian, Feijiang Li. Learning with Mitigating Random Consistency from the Accuracy Measure, Machine Learning, 2020, 109: 2247-2281 **Response to Other Comments Or Suggestion 2-4:** &nbsp;&nbsp;&nbsp;&nbsp; $K$, $I$, and $J$ are defined in Equation (1), though we will include their definitions after each subsequent formula. Thank you for your suggestion. Thank you for pointing these out and we will modify them accordingly. **Response to Question 1:** &nbsp;&nbsp;&nbsp;&nbsp; CE-PSED refers to the summation of CE and PSED. Thank you for pointing this and we will add this in the final version. **Response to Question 2:** &nbsp;&nbsp;&nbsp;&nbsp; This is indeed a profound question. As illustrated in Figures 7 and 9 in the appendix, the t-SNE visualization of cross-entropy (CE) for the MPEG and Caltech-101 datasets fails to reveal clear inter-class structures. By introducing the block-diagonal constraint on the similarity matrix—that is, incorporating PSED as part of the loss function—we can enhance inter-class separation in data representation, thereby improving model performance, shown as Table 7 in Appendix. We will add this explanation in the final version. **Response to Question 3:** &nbsp;&nbsp;&nbsp;&nbsp; For the image datasets, we firstly employed MoCo v3 or VGG for feature extracting, with downstream classification processing handled by fully connected layers. The specific methodology is detailed in Section 7.2 of the main text and Appendix 3.0.1. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. The clarifications should be included into the main part of the paper. I prefer to remain my evaluation unchanged (weak accept).
Summary: This paper proposes a loss function for image classification. It follows the idea of promoting better representation learning and proposes an improvement on mitigating the random consistency of existing methods. Properties such as unbiasedness and generalization bounds are theoretically investigated. Empirical evaluation of both accuracy and f-measures are conducted on multiple image datasets. Claims And Evidence: The first claim of the loss function proposal is well supported by the following contents. The second claim of theoretical investigation is supported by section 4 and 5. The last claim of a model proposal is also supported, but may narrow application scenes of the proposed method. Methods And Evaluation Criteria: The proposed methods aim to solve multi-class classification problems. The evaluation criteria of accuracy and f-measures are suitable, theoretical investgations also show indirect evidence for the proposed method. Theoretical Claims: I checked the section 4 and 5 for theoretical claims. Experimental Designs Or Analyses: Significance tests seems to evaluate the motivation of the proposed method, that is to mitigate the random consistency of existing methods. It would be nice to elaborate more on the connection between the significance tests and the motivation of the proposed method. Supplementary Material: I have checked the detailed experimental results part. Relation To Broader Scientific Literature: This paper relates to the broad literature of classification and loss function design. It also draws attention on representation learning. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - It is good to also discuss computation complexity of the proposed loss term. Weakness - This paper is not clear for some concepts and shows some difficulty for readers with less literature knowledge to follow. - Section 6 seems to heavily restrict the selection of network architecture, hindering further applications. Other Comments Or Suggestions: N/A Questions For Authors: 1. Existing methods seem to heavily rely on the design and selection of non-informative matrices. Is there any general summary on this point? 2. Please elaborate more on the motivation of the proposed loss function. How it "can address the shortcomings of dinfor(K) and dSED(K)" and how it can mitigate random consistency? 3. It says random consistency has been shown to exists in consistency measures. What is the detailed definition for random consistency, how is harms learning and how it exists in consistency measures? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 72cG, We are very grateful for your valuable comments and questions. The responses are as follows: **Response to Claims And Evidence:** The proposed loss function serves as a universal quality measure for similarity matrices, which are foundational elements across various learning paradigms. Its generalized formulation enables applications in training deep network, metric learning, kernel learning and so on. To demonstrate the broad applicability, we applied it to train advanced network architectures (ViT and CNN). The experimental results and detailed settings are listed in our response to Reviewer WSCE Weakness 1. **Response to Experimental Designs Or Analyses:** By incorporating the random consistency mitigation mechanism (Eq. 5), we have enhanced the homogeneity and unbiasedness of the loss function, thereby improving the model's generalization capability. Significance tests demonstrate that this improvement maintains stable performance advantages across different data partitions, confirming that the enhancement is not attributable to random factors. **Response to Weaknesses 1:** This is an excellent suggestion. To enhance completeness and readability, we will add both a conceptual introduction to RC and an expanded literature review in the appendix of the next edition. For further details, please see Question 3. **Response to Weaknesses 2:** This is a constructive suggestion. Indeed, the opening paragraph of Section 6 has some deficiencies in clarifying the motivation and scope behind our choice of network architecture. In fact, to validate the effectiveness of the loss function, we deliberately opted the proposed loss function for the most basic fully connected network structure. As mentioned earlier, our proposed loss function has relatively broad applicability. In the revised version, we will explicitly elaborate on the scope and adaptability of the method in section 6. **Response to Question 1:** The dependency of existing methods on uninformative matrices can be derived from Eq. 1, since they measure the information content by computing the distance from uninformative matrices (e.g., identity matrix or full one matrix). **Response to Question 2:** The motivation of this paper is to propose a loss function ($d_{PSED}$) for measuring the information content of similarity matrices when ground-truth labels are available. The shortcoming of $d_{infor}(\mathbf{K})$ lies in its fundamental inability to properly assess the block-diagonal structure of similarity matrices, as evidenced by Property 1. To address this issue, $d_{PSED}$ introduces the ground-truth adjacency matrix as the reference standard, thereby enabling quantitative assessment of how closely the similarity matrix approximates the ideal block structure. The limitation of $d_{SED}(\mathbf{K})$ manifests as differential biases toward identity matrix $I$ versus all-ones matrix $J$ across varying class distributions. Through mitigating random consistency, we obtain $d_{PSED}$, a weighted element-wise dissimilarity metric for similarity matrices that achieves unbiased evaluation of both $I$ and $J$. The $d_{PSED}$ mitigates the random consistency issue by subtracting the expectation of $d_{SED}(\mathbf{K})$ under random label permutations (Eq.5). **Response to Question 3:** Consistency metrics measure the agreement between two random variables, while random consistency(RC) refers to spurious agreement arising purely from randomness[1]. A canonical manifestation of RC occurs when examinees achieve measurable test scores solely via random response patterns. The mechanisms by which RC harms the learning process include evaluation distortion, optimization misguidance, and generalization barriers. Failure to deduct the RC baseline may lead to overestimating the model’s actual consistency performance (e.g., an original consistency score of 0.6 vs. a random baseline of 0.2 means the true effective consistency should be 0.4). When loss functions include RC without proper correction, they can induce optimization bias, causing algorithms to spuriously improve consistency metrics by overfitting to noise [2] or data bias[3,4] instead of learning genuine data patterns. These would consequently impair the model's generalization ability. [1]Wang J, Qian Y, Li F, et al. Generalization performance of pure accuracy and its application in selective ensemble learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(2): 1798-1816. [2]Wang J, Qian Y, Li F. Learning with mitigating random consistency from the accuracy measure. Machine Learning, 2020, 109: 2247-2281. [3]Li J, Qian Y, Wang J, et al. PHSIC against random consistency and its application in causal inference. IJCAI, 2024, 5(10): 15-20. [4]Vinh N X, Epps J, Bailey J. Information theoretic measures for clusterings comparison: Variants, Properties, Normalization and Correction for Chance. Journal of Machine Learning Research, 2010, 11: 2837–2854.
null
null
null
null
null
null
null
null
Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks
Accept (oral)
Summary: Assuming that any model is trained for a number of iterations that are compute-optimal given its size, this paper shows, both empirically and theoretically, that the training curves of models of different widths are identical up to an affine transformation. Deviations due to randomness in the training procedure are larger than those across model width. Furthermore the paper shows, both empirically and theoretically, that the result holds across a variety of learning rate schedules. These results can be used to improve scaling predictions. Claims And Evidence: This work can potentially help scaling up models more accurately. The combination of empirical and theoretical results, and the agreement between the two, is compelling. The density and significance of results is high. Methods And Evaluation Criteria: In CIFAR-5M experiments, the model width varies very little, less than a factor of three, which corresponds to a one order magnitude variation of the parameter count. Fitting power laws with such small variation is unreliable. Theoretical Claims: It remains unclear how much useful is Eq.6. The proof of collapse in Appendix G assumes that the sum in Eq.6 can be approximated by two terms. Eq.7 is also a special case of Eq.6 with two terms of the sum only. So it remains unclear whether we need Eq.6 at all. Experimental Designs Or Analyses: OK Supplementary Material: NA Relation To Broader Scientific Literature: OK Essential References Not Discussed: OK Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: If it’s true that the primary effect of learning rate decay is annealing SGD noise, then can we determine what is the best way of decaying the learning rate? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback and supportive review. We have attached some additional figures [here](https://drive.google.com/file/d/1ZkobNTqh90nnUcunKqUT2Dyx3T4zAb5O/view), and address your specific questions below. **On the limited range of widths in CIFAR-5M experiments.** We acknowledge the limitations of this experiment, which was a combination of our computational constraints as well as the small size of the dataset itself. We set the maximum width to 2048 in the CIFAR-5M experiments to 1) limit the computational cost of our experiments, and 2) avoid excessive repetition of the training data. Based on the estimated compute-optimal training horizon, the width 2048 model already required training on the CIFAR-5M dataset (which only has 5M training images) for over 10 epochs. A larger dataset is necessary to train even wider models to compute-optimality without overfitting effects. We note that our MLP experiment does not suffer from this data constraint, where we scaled up the width by 8x from 512 to 4096, corresponding to a 64x scale up in model size, and demonstrated supercollapse across all models. In addition, we demonstrate in Figure 1 (middle, right) [here](https://drive.google.com/file/d/1ZkobNTqh90nnUcunKqUT2Dyx3T4zAb5O/view) that supercollapse occurs in two additional data modalities: chess game string representations and chemical structures, and along one more scaling dimension: depth, further establishing the generality of our observations. **On the utility of Equation 6.** We emphasize that our claim regarding the dominance of two terms in Equation 54 is mathematically valid in the asymptotic limit of large $D$, not merely a convenient approximation. This generalized analysis of power law loss curves extends the connection between supercollapse and compute-optimal scaling beyond the simplified form in Equation 7. This extension is particularly important because recent theoretical work [1] demonstrates that this broader class of loss curves naturally emerges in high-dimensional regression models, whereas the two-term power law in Equation 7 fails to capture significant regions of the phase diagram (Figure 2, [1]). To empirically validate our theoretical findings, Figure 6 attached via [this link](https://drive.google.com/file/d/1ZkobNTqh90nnUcunKqUT2Dyx3T4zAb5O/view) presents additional experiments using the Power Law Random Features (PLRF) model introduced in Section 3.1. Unlike our setup in Figure 4, we specifically selected values of $\alpha$ and $\beta$ to produce loss curves asymptotically described by three power laws (detailed in [1]), which cannot be accurately represented by Equation 6. These results confirm that supercollapse occurs exactly as predicted by our analysis in Appendix G. Without our generalized framework, we would lack the theoretical foundation to explain supercollapse in these more complex settings. **On determining the optimal decay schedule** For a fixed $\hat{K}$, equation 10 suggests that decaying the learning rate as late as possible to maximize total gradient flow time is best. However, as we show in Figure 5 and Figure 9 in the paper, the best fit $\hat{K}$ has non-trivial dependence on the schedule and correlates with the preconditioned NTK trace, which tends to be higher for schedules that start decaying earlier. This gives a tradeoff between progress in gradient flow time (favoring later decay) and more favorable geometry (favoring early decay). Understanding this tradeoff requires a deeper understanding of $\hat{K}$, its dynamics, and its relationship to the schedule. We believe this is an exciting direction for future research on optimizing learning rate schedules. Thank you again for your review. We hope we have addressed your questions. Please let us know if we can clarify anything further. [1] Paquette et al. "4+3 phases of compute-optimal neural scaling laws."
Summary: This paper introduces the concept of "supercollapse," where the loss curves of networks trained under compute-optimal conditions collapse to a universal curve after affine rescaling. The authors demonstrate this phenomenon across various architectures (transformers and mlps, with different dim sizes) and learning rate schedules, providing both empirical evidence and theoretical insights into why and how this collapse occurs. This study contributes to the understanding of scaling behaviors in networks and suggests new ways to optimize training processes. Claims And Evidence: Pros: The study provides detailed experimental setups and rigorous theoretical analysis to support its claims, including: + Supercollapse is observable across various architectures and learning rate schedules. + Accurate estimation of compute-optimal parameters is essential for observing supercollapse. Cons: However, some concerns remain. The training datasets are somewhat limited, as only CIFAR-5M was used, with next-pixel predictions as the employed task. This does not fully align with the motivation to explore the dynamics of large models, particularly large language models (LLMs) or multimodal language models, which are the primary focus. Ideally, both inputs and outputs should consist of text tokens. It remains uncertain whether the conclusions drawn from CIFAR-5M and the next-pixel prediction task are applicable to LLM pre-training. Methods And Evaluation Criteria: The methods are appropriate and clearly described, but the evaluation is weak in terms of the dataset. Theoretical Claims: Theoretical analysis is provided to support the claims. Experimental Designs Or Analyses: The experimental design is solid and well-executed, both in terms of the model and the conditions. However, the inclusion of additional datasets, as previously mentioned, would further enhance the study. Supplementary Material: Yes. Most of content of Supplementary Material are reviewed. Relation To Broader Scientific Literature: This paper provides valuable insights that could support a wide range of model training-related tasks. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper attempts to address an important issue: predicting the universal curve of a model during training. It has the potential for a significant impact and could be highly beneficial for neural network training. However, due to the limited and insufficient diversity in the training data, the conclusions require further justification. Additionally, there are a few typos and grammar errors: + Line 135: "80% of training" should be "80% of training." + Line 156: "denotes" should be "denote." Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. Your point about the limited diversity of the datasets is well taken. To address this point, we conducted additional experiments on two non-image domains: the [Lichess chess games dataset](https://huggingface.co/datasets/Lichess/chess960-chess-games) dataset of 22M chess games recorded in algebraic chess notation, and [SMILES-250M](https://huggingface.co/datasets/HoangHa/SMILES-250M), a large collection of chemical structure representations. In Figure 1 (middle, right) of the [linked PDF](https://drive.google.com/file/d/1ZkobNTqh90nnUcunKqUT2Dyx3T4zAb5O/view), we show the next-token prediction loss curves of compute-optimally trained transformers and their rescaled versions. Our results demonstrate that supercollapse occurs consistently across both these datasets. In addition, we showed that scaling depth also gives supercollapse in the MLP setting similar to scaling width (Figure 1 left). Combined with our original findings on CIFAR-5M, MLP regression, and high-dimensional linear models, these results provide compelling evidence that supercollapse is a general phenomenon spanning diverse data modalities and model architectures. Regarding LLM pre-training, computational constraints prevented us from conducting meaningful experiments. The smallest model in the Chinchilla scaling law [1] study has approximately 80M parameters, similar to our largest tested model. Our limited computation meant that we could not perform high-quality scaling studies in this setting. We believe that our consistent observation about supercollapse across multiple architectures (transformer, MLP, linear model) and data modalities (images, chess, chemical structures, regression), as well as our theoretical analysis establishing the connection between supercollapse and sum-of-power-law loss curves motivates future studies on LLM pre-training, which we hope to pursue in followup work. While we recognize the potential practical impact of our findings for LLM pre-training, we would like to emphasize that the primary contribution of this work is establishing supercollapse as a novel and intriguing phenomenon across diverse settings, and demonstrating that studying universality in loss curves offers a promising avenue for advancing our scientific understanding of scaling. We will revise the paper to articulate this scientific focus more clearly. Thank you again for your constructive feedback, which has helped improve the clarity and positioning of our paper. We will also correct the typos you mentioned in the revised version of the paper. We hope you will consider raising your score in light of our response and additional experiments. [1] Hoffmann et al. "Training compute-optimal large language models." --- Rebuttal Comment 1.1: Comment: The new results are quite impressive. I would like to increase the score. Further testing across different modalities and datasets will undoubtedly enhance the overall quality of the paper.
Summary: The paper intvestigates the phenomenon of "supercollapse," where loss curves from compute-optimally trained neural networks collapse to a single universal curve, after an affine rescaling. This universality is observed across different model sizes and learning rate schedules, and it is characterized by deviations smaller than those caused by stochastic optimization noise. The authors prove that training to a Pareto frontier is necessary for supercollapse and show that learning rate schedules deform loss curves in a predictable, scale-invariant way. They validate their claims using transformer models on CIFAR-5M and MLPs on a synthetic regression task, showing that supercollapse occurs across architectures and tasks. The paper suggests that the study of full loss curves, instead than just the final loss, can improve scaling predictions and provide insights into the training dynamics of large models. Claims And Evidence: The claim are supported by extensive numerical evidence. Methods And Evaluation Criteria: The methods are appropriate for the problem. Theoretical Claims: I check the proof of Theorem 3.1 and I have no issues to discuss. Experimental Designs Or Analyses: I have no issues to discuss. Supplementary Material: I reviewed most of the supplementary material, focusing in particular on sections A-E, G-H. Relation To Broader Scientific Literature: The key contributions of the paper relate to the literature on scaling laws and training dynamics in machine learning, expanding on previous work on empirically observed power-law relationships between compute, model size, and test risk. It goes further by uncovering a universal structure in full loss trajectories, moving beyond the usual focus on final loss values. The paper also links its empirical findings to theoretical research in high-dimensional optimization and random features. Essential References Not Discussed: I am not aware of any essential references that have been omitted. Other Strengths And Weaknesses: Strengths: The paper introduces the concept of supercollapse and extends the literature on scaling laws by revealing universal dynamics in the full loss trajectories. It combines a theoretical analysis (showing the necessity of a power-law Pareto frontier) and empirical validation across architectures (transformers and MLPs) and tasks (CIFAR-5M and synthetic regression), as a support to its claims. The findings and insights provided by this work have practical implications for scaling predictions and optimization strategies, suggesting that small-scale experiments can predict the behavior of larger models. Weaknesses: Some key quantities are not clearly defined in the main text (for instance $\gamma$ and $P_0$ on line 147, $k$ in equation 6, and $M$ on line 136). While some of them can be deduced from the context and others are clarified in the supplementary material, I would still suggest that the authors clarify all notation directly in the main text to ensure accessibility for the reader. Other Comments Or Suggestions: - Typo on line 027: repetition of "the" - Typo on line 135, second column: missing period - Typo in eq. 2: there is a right square bracket without the corresponding left one - Could you clarify the recursive definition of $k_i$ on line 138? Questions For Authors: I have no additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your careful reading of our draft and supportive review. We will update the paper to ensure that all notations are clearly defined in the main text and fix the typos you identified. Specifically regarding the definition of $k_i$, we first sample a scalar $s_i$ from the power-law distribution defined in L140 and then sample a random unit vector $v_i$ and define $k_i$ as $s \cdot v_i.$ This will be clarified in the text. We have attached a PDF [here](https://drive.google.com/file/d/1ZkobNTqh90nnUcunKqUT2Dyx3T4zAb5O/view) with additional experiments requested by other reviewers to further demonstrate the generality of supercollapse. If you are interested, the details can be found in our other rebuttals. Let us know if we can clarify anything further. --- Rebuttal Comment 1.1: Comment: Thanks for your reply and additional experiments.
Summary: - When neural networks are trained under compute optimality, their loss curves across model widths collapses to a single universal curve under a simple affine rescaling - The authors call this "supercollapse" because deviations between curves are smaller than fluctuations from multiple training runs (where the noise arises from random initialization and data ordering) - Supercollapse occurs across various learning rate schedules (so long as they're decaying) and architectures, but requires accurate estimation of the compute-optimal data exponent and irreducible loss - The authors prove that at power-law Pareto frontier is necessary (and in some cases, sufficient) for supercollapse to emerge - Several other complementary results ## Update After Rebuttal I think this is a great paper. I don't think it deserves to be an Oral (in my Claims and Evidence, I pinpointed two steps that I think this paper needed to do in order to be that outstanding) but I think the authors have done a wonderful job. Claims And Evidence: - Overall, I think this is a solid and thorough paper. - In my review, I scrutinize some specific components. I welcome the authors to improve the paper based on my comments, or to convince me that my comments are mistaken. Given compelling evidence for either, I will increase my score. - I think this paper could be an awarded paper if its practical relevance was increased. One way would be to show that supercollapse holds for language model ladders pretrained at scale, but this is likely an expensive request that the authors may be unable to afford. Another way would be to quantitatively compare how a “supercollapse-based” scaling law predictor compares to standard scaling law predictors, ideally in a backtested manner; the authors hint at this (“suggesting that supercollapse itself can be used to improve scaling predictions”) but stop short of actually applying this claim practically. I’m also unsure of whether a supercollapse predictor is feasible since Figure 3 suggests that $\gamma$ and $L_0$ are the most sensitive, and I would intuitively guess that these two parameters require large models to accurately estimate, but I could be wrong! - To me, the point of highest uncertainty in this paper is its generality. My doubt stems from the fact the authors only swept fixed-depth networks. I couldn’t find a reason, I don’t understand why the authors did this, and there is now a question in my mind of whether supercollapse is as general as the authors claim it is. Methods And Evaluation Criteria: Yes. I add specific comments in later sections. **Edit: Based on the authors' response, I'm upgrading my score to a 5** > $\theta$ refers to the fraction of optimal compute. To clarify, I was suggesting that $\theta$ should be defined in the main text, likely in Section 2.2. It's possible I'm missing its definition in the main text, but I couldn't find one (and while $\theta$'s meaning can be inferred contextually, I think one should prefer being explicit). Theoretical Claims: - Theorem 3.1 is a nice result. - Section 3.1 Sum of Power Laws Collapse Exactly: To clarify my understanding, Equations 7-9 (inclusive) hold only for the PLRF model? If so, could the authors please help me understand why a power law compute-optimal Pareto frontier is only sufficient for collapse near $\theta=1$? - Section 3.2 Line 321 Right Column: The manuscript states, “This result suggests that the primary effect of learning rate decay is to accelerate the loss curves by annealing the noise introduced by SGD.” I’m confused by this claim. I would think that the noise introduced by SGD is inherent to the dataset, the batch size and the model. How is the learning rate able to suppress noise in SGD? - Equation 2: Since $s$ appears in the expectation and variance, I would expect it to appear inside the squared brackets. Is $s$ suppressed for brevity? Or have I misunderstood something? - Section 3.3 Lines 415-420 Left Column: “At late times… smaller than $\sigma_D$.” I have no intuition for this result. Do the authors? If so, could they please add it? Experimental Designs Or Analyses: - I can’t tell whether I’m reading too much into this, but the sentence “We scale the model size by increasing the width D and keep depth fixed.” raised three questions: (1) Why did the authors not scale the models with depth? (2) How do the results change (if at all) when depth is scaled with width or independently from width? (3) More generally, are there realistic scaling recipes under which supercollapse does not appear? - Appendix C.1: I’m not quite sure I understand the methodology here. To confirm my understanding: (a) it appears that parameters were swept from ~7e6 to 7e7 for Transformer and from 2e6 to 7e7 for MLP? (b) For each parameter size, to vary the compute, presumably you swept a variety of tokens? What values of tokens were swept exactly, and why is the range of compute per parameter note the same? (c) Why were these models trained without learning rate decay? (d) In Figure 7, for each parameter value, I would expect to see some dot indicating the optimal compute for that parameter size, but I don’t; am I misunderstanding what this figure purports to show, or should the optimal compute at each parameter size be indicated by a unique marker? (e) I don’t know how much this matters, but how was the power law fit? Linear regression in log-log space, something more akin to Chinchilla with Huber loss, something else? - Section 2.2: What is $\theta$? The fraction of optimal compute, yes? If so, it might be good to explicitly state this in the text. - Section 2.2: nit: Right column, line 135: Missing period after “training”. - Section 2.2: I feel like I’ve misunderstood something about the “last 80% of training”. Figure 1 (middle) suggests to me that the curves overlap for the last 90% of training. Am I misreading the data? Or where is this 80% figure coming from? - Section 2.3: Line 174 left column “In Figure 1 (right)”, am I confused or should this refer to Figure 2? - Figure 2: There appear to be no error bars or notions of uncertainty. Could the authors please repeat this experiment multiple times, or if that is too expensive, bootstrap it? I’m unsure whether the cross-D collapse deviation $\Delta(t)$ is indeed meaningfully smaller than the cross-seed fluctuation. - Figure 2: What do the different hues correspond to? - Line 193-195, left column: “This shows that the end point of a specific loss curve efficiently encodes much of the randomness throughout the trajectory.” I’m not quite sure I understand this sentence. Could the authors please clarify? - Section 2.3: I'm uncertain whether the subsection's result is profound or trivial. We certainly expect this result at $\theta=1$. The authors point out, the interesting aspect is that it stays smaller over an $O(1)$ fraction of training time, but I imagine one could handwave that we expect smooth(ish) scaling trends. - Figure 3 (c) Left: Could the authors please consider a larger range of $P_0$ values for the sensitivity analysis? I want to know how much $P_0$ can really be stretched before the collapse deviation increases substantially. - (minor) Figure 9 (a): Why is variance so large in the vertical direction? Supplementary Material: Yes. Primarily appendices A, B, C. Relation To Broader Scientific Literature: I think this paper contextualizes itself well w.r.t. large scale empirical scaling laws e.g., Kaplan, Hoffman, and analytically tractable models, e.g., Paquette. Essential References Not Discussed: None that spring to mind. Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: - nit: “constan” on Line 533 should be “constant” - nit: On line 437 left column, should "explaining why schedules at decay faster have..." be "explaining why schedules that decay faster have" ? Questions For Authors: See earlier boxes. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the constructive feedback! We provide several additional results and clarifications here and will include them in the updated paper. Corresponding figures are in [the linked PDF](https://drive.google.com/file/d/1ZkobNTqh90nnUcunKqUT2Dyx3T4zAb5O/view). **Further evidence for the generality of supercollapse.** We focused on width scaling as it's the most well-studied dimension with extensive literature [1,2,3,4] on optimal learning rate and initialization scaling, which made finding the proper hyperparameter scaling needed for supercollapse much easier. Inspired by your question, we conducted a depth-scaling experiment by adding residual connections to our MLP (creating a well-defined depth scaling limit) and using depth-µP [5] to scale learning rate and initialization. Figure 1 (left) in the linked PDF shows depth scaling also produces supercollapse, confirming it's not width-specific; this plot is consistent with our theory linking supercollapse to power law compute-loss pareto frontiers. In addition, Figure 1 (middle, right) demonstrates supercollapse occurring in two additional data modalities: chess game string representations and chemical structures, further establishing its generality. **Realistic scaling recipes that don’t admit supercollapse.** We expect supercollapse to break down when hyperparameters such as the learning rate are not properly scaled with the model. In Figure 3 of the linked PDF, we ablate µP while keeping learning rate constant for all widths, which makes the learning rate increasingly suboptimal as width scales [4]. Without µP the rescaled curves don’t collapse. **Practical relevance of supercollapse.** Exploring supercollapse's practical applications is an exciting direction for future work! Our primary goal in this work is to establish supercollapse as a novel phenomenon across multiple data modalities and architectures and to show how studying loss curve universality enhances our understanding of scaling. That said, we believe this work already provides compelling evidence of supercollapse's practical value. Figures 3(a) and 4 in the paper show how absent supercollapse reveals errors in estimated data exponent $\gamma$. Similarly, MLP without µP (Figure 3, linked PDF) shows comparable performance to MLP with μP on the loss-compute plot except for the largest model, but the absence of supercollapse reveals scaling errors. Supercollapse is a powerful diagnostic tool, making errors more detectable than when evaluating final losses alone. **Methodology for fitting optimal training compute.** We estimate optimal compute by training models for a fixed number of steps (7e5 for transformers, 1e6 for MLPs) and identifying the compute-optimal point. We find models with lowest loss at each compute value on a logarithmic grid to create the compute-loss Pareto frontier (Figure 2). Our discrete model sizes mean each model is optimal for a range of compute values, producing multiple y-values per x-value in Figure 7 (Appendix C.1). We fit a power law to these points using log-log linear regression to estimate optimal compute per model size, similar to Chinchilla’s Figure 1 [6]. We use constant schedule to efficiently sweep compute/token budgets, rather than running separate experiments for each budget as in Chinchilla. This reduces experimental costs, a strategy also used in [7] for LLM scaling law studies. **Additional clarifications** * $\theta$ refers to the fraction of optimal compute. * Figure 4 in linked PDF shows collapse deviation std dev across 5 trials. Colors indicate width (now in the legend). * $\Delta < \sigma$ for an $O(1)$ interval isn't trivial as $\Delta$ can grow quickly away from $\theta=1$, as we showed for constant schedule in Figure 6 of the paper. * SGD noise scales as $\Theta(\eta/B)$ (see Eq 33, Appendix E). Annealing LR thus reduces noise similar to increasing batch size. * Intuition for L415-420: variance of noise difference between timepoints is smaller than individual variances when sufficiently correlated. * We expanded $P_0$ range for sensitivity analysis in Figure 5 (linked PDF). * We suppressed $s$ in Eq 2 for brevity. * Outliers in Figure 9(a) occur when $\Delta\eta$ approaches 0 in multi-cycle cosine schedule, making $\hat{K}$ divergent and not well-defined. [1] Yang et al. "Feature learning in infinite-width neural networks." [2] Yang et al. "Tensor programs ivb: Adaptive optimization in the infinite-width limit." [3] Bordelon et al. "Self-consistent dynamical field theory of kernel evolution in wide neural networks." [4] Everett et al. "Scaling exponents across parameterizations and optimizers." [5] Yang et al. "Feature learning in infinite-depth neural networks." [6] Hoffmann et al. "Training compute-optimal large language models." [7] McLeish, Sean, et al. "Gemstones: A Model Suite for Multi-Faceted Scaling Laws."
null
null
null
null
null
null
Identifying and Understanding Cross-Class Features in Adversarial Training
Accept (poster)
Summary: Adversarial Training (AT) is a widely adopted technique for enhancing the robustness of deep learning models against adversarial examples. However, a critical challenge associated with AT is robust overfitting. As training progresses, the robust accuracy on the training set continues to improve, yet the robust accuracy on the test set stops increasing and instead begins to decline. In this paper, the authors investigate the phenomenon of robust overfitting through the lens of class-wise feature attribution. They observe that the decline in robust accuracy occurs when adversarial training (AT) shifts its focus away from cross-class features and instead relies solely on class-specific features. Based on this observation, they hypothesize that robust overfitting arises due to the reduced reliance on cross-class features during AT. To support their hypothesis, they provide both theoretical analysis and extensive empirical evidence. Claims And Evidence: I find the authors’ evidence convincing, as they provide solid theoretical proofs for each theorem they propose. Additionally, they validate their findings through extensive experiments conducted across multiple datasets. Methods And Evaluation Criteria: The metrics and benchmark datasets used in their experiments are appropriate. Theoretical Claims: I have not specifically verified the correctness of their theoretical proofs. Experimental Designs Or Analyses: The authors conduct their experiments on various classification datasets, including CIFAR-10 and CIFAR-100, to visualize feature attribution correlations at different training stages. Their experimental results effectively support their findings, demonstrating that AT tends to reduce reliance on cross-class features as the model becomes overfitted. Supplementary Material: The supplementary material primarily includes theoretical proofs for the proposed theorems, along with additional experimental results and the pseudo-algorithm detailing the computation of the feature attribution correlation matrix. Relation To Broader Scientific Literature: This work primarily explores why traditional AT suffers from robust overfitting and how AT with smooth label can mitigate this issue. By highlighting the importance of preserving cross-class features, the study provides valuable insights that could guide future research toward improving AT and enhancing model robustness. Essential References Not Discussed: n/a Other Strengths And Weaknesses: The strength of this paper lies in its solid evidence and thorough explanation of why Adversarial Training (AT) suffers from robust overfitting. It also emphasizes the importance of utilizing cross-class features, demonstrating how this approach can help alleviate the issue and improve the effectiveness of AT. However, a concern I have is the paper’s discussion on its significance for future research. While previous work may not have fully understood how AT utilizes class features, we still have an intuitive understanding of the underlying logic. Even with the clearer evidence provided, the challenge remains in how to preserve reliance on cross-class features while continuing to improve the robustness of the model. Other Comments Or Suggestions: I suggest that the authors provide more detailed explanations of the concepts introduced in their paper. For instance, the terms "robust accuracy" and "robust loss" are used without further clarification. I assume these terms refer specifically to the accuracy and loss measured on adversarial examples, but it would be helpful for the authors to explicitly define them for clarity. Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer RcTN, Thank you for your valuable feedback. We address your concerns below. --- **Q1**: A concern I have is the paper’s discussion on its significance for future research. While previous work may not have fully understood how AT utilizes class features, we still have an intuitive understanding of the underlying logic. Even with the clearer evidence provided, the challenge remains in how to preserve reliance on cross-class features while continuing to improve the robustness of the model. **A1**: Thank you for the thoughtful comment. We list some potential directions for further applications of our theory as follows: - **Data (re)sampling**. Since more generated data are prone to be helpful for advancing adversarial robustness [1], it requires significantly more data and computational costs. From the cross-class feature perspective, adaptively sampling generated data **with considerations of class-wise relationship** may improve the efficiency of large-scale AT and decrease the forgetting of cross-class features. - **AT configurations**. Customizing AT configurations like perturbation margins or neighborhoods is useful for improving robustness [2]. Since cross-class features are more sensitive against robust loss, customizing AT configurations for different samples or classes **based on class-wise relationships** may mitigate this sensitivity and further improve robustness, as shown in Theorem 1. - **Robust module design:** The architecture of a model and the mechanisms of activation play a crucial role in adversarial robustness [3]. Therefore, designing modules that either implicitly or explicitly emphasize cross-class features may enhance robustness. For example, calibrating channel activation can improve robustness [4], thus creating activation mechanisms that preserve more cross-class features can further contribute to this improvement. In addition to these AT algorithms, we would also like to highlight the **theoretical modeling potential** of our work. Similar to the robust/non-robust feature decomposition [5], which has been applied in many subsequent theoretical works, e.g. [6,7,8], our cross-class feature model has the potential for more in-depth modeling of adversarial robustness, contributing new tools in its theoretical analysis. [1] Better Diffusion Models Further Improve Adversarial Training. ICML 2023 [2] CAT: Customized Adversarial Training for Improved Robustness. IJCAI 2022 [3] Robust Principles: Architectural Design Principles for Adversarially Robust CNNs. BMVC 2023 [4] Improving Adversarial Robustness via Channel-wise Activation Suppressing. ICLR 2021 [5] Adversarial examples are not bugs, they are features. NeurIPS 2019 [6] On the Tradeoff Between Robustness and Fairness. NeurIPS 2022 [7] Understanding the Impact of Adversarial Robustness on Accuracy Disparity. ICML 2023 [8] Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data. ICLR 2025 --- **Q2**: I suggest that the authors provide more detailed explanations of the concepts introduced in their paper. For instance, the terms "robust accuracy" and "robust loss" are used without further clarification. I assume these terms refer specifically to the accuracy and loss measured on adversarial examples, but it would be helpful for the authors to explicitly define them for clarity. **A2**: Thanks for your careful reading. Your assumption is correct, we will clarify these definitions in our revision as follows: - **Robust accuracy** is the accuracy of the model on adversarial examples. - **Robust loss** is the average cross-entropy loss of the model on adversarial examples. --- We truly appreciate your valuable and constructive feedback. If you have any further questions or concerns, please let us know.
Summary: This paper explores a unique characteristic of adversarial training from the perspective of class-wise feature attribution. Specifically, it highlights that data often contain **cross-class features**, such as the feature of wheels shared by the automobile and truck classes in the CIFAR-10 dataset. The authors discover that as training progresses and the robust loss decreases beyond a certain threshold, the model begins to abandon cross-class features. Consequently, the model makes decisions primarily based on class-specific features rather than cross-class ones, which contributes to improved robustness. Through their experiments, they demonstrate that this phenomenon can be observed across various adversarial training setups. ## update after rebuttal The insights are valuable and merit inclusion in the main paper, though their earlier presence—particularly the discussion on applications—would have strengthened the submission. Therefore, I maintain my score in favor of acceptance. Claims And Evidence: The main claim of the paper can be summarized as follows: **(Main claim)** Cross-class features are ignored after the optimal point during adversarial training, leading to adversarial overfitting. The main evidence supporting this claim is presented in Figure 2 and Figure 8. Since the paper focuses on the unique characteristics of adversarial training, Figure 8, which illustrates the tendency of standard training, is highly significant. The results demonstrate that adversarial and standard training exhibit different behaviors (so I personally recommend that the authors mention Figure 8 earlier in the paper). Furthermore, the definition of Class Attribution Similarity (CAS) is quite reasonable. The experiments cover various models and datasets, which validates their claim. I have several questions for the authors: 1) **A more solid explanation for Figure 8 is needed** In Section 5.3, the authors explain why standard training does not exhibit a similar tendency: "This observation is consistent with the characteristic of standard training, which generally does not exhibit overfitting." However, as empirically demonstrated in [1], standard training also suffers from overfitting. Please provide a more solid explanation for this observation. 2) **The potential use of cross-class features** While I acknowledge the importance of this observation, the paper does not provide the benefits of knowledge distillation (or soft-labeling) in terms of adversarial robustness. While they can mitigate the observed phenomenon, it seems they are not effective in improving the robsut accuracy based on Figure 3. Furthermore, class-aware features are difficult to identify, as noted in [2]. Therefore, they cannot be directly used as a regularizer or training trick for adversarial training. How, then, can this insight be leveraged to improve adversarial robustness? If there exists at least a potential direction, it would be valuable to explore these phenomena further. 3) **Regarding catastrophic overfitting** While the current observation on multi-step adversarial training is quite interesting, catastrophic overfitting is a well-known issue in adversarial training [3-4]. Can the observed phenomenon also be detected in the single-step adversarial training framework? **Suggestions:** 1) Instead of using numerical labels in all figures, using class labels (e.g., "truck") would provide a more intuitive understanding for readers. As the authors mentioned, automobile (label=1) shows high Class Attribution Similarity (CAS) with truck (label=9). If class names were displayed in the figures, the observations would be clearer. 2) In Equation (5), \( A \) is indexed as \( A_i \), but in Equation (6), it changes to \( A^i \). Please ensure consistency in notation. - [1] Jiang, Yiding, et al. "Fantastic generalization measures and where to find them." arXiv preprint arXiv:1912.02178 (2019). - [2] Tsipras, Dimitris, et al. "Robustness may be at odds with accuracy." arXiv preprint arXiv:1805.12152 (2018). - [3] Wong, Eric, Leslie Rice, and J. Zico Kolter. "Fast is better than free: Revisiting adversarial training." arXiv preprint arXiv:2001.03994 (2020). - [4] Kim, Hoki, Woojin Lee, and Jaewook Lee. "Understanding catastrophic overfitting in single-step adversarial training." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 9. 2021. Methods And Evaluation Criteria: The experiments cover various models and datasets, which validates the observation of the paper. Theoretical Claims: I've checked all theoretical claims and verified there is no problem. Experimental Designs Or Analyses: Refer to Claims And Evidence. Supplementary Material: I've read the entire supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Refer to Claims And Evidence. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer v6k6, Thank you for your valuable feedback. We address your concerns below. --- **Q1**: **A more solid explanation for Figure 8 is needed** **A1**: Thank you for your thoughtful comment. First, we would like to clarify that Figure 8 aims to show that our theory for adversarial training (AT) is compatible with standard training (ST), where the overfitting issue is far less than robust overfitting in AT. This is supported by the fact that both the accuracy and CAS of ST do not change as significantly as AT in the latter training stage. As for the overfitting in ST stated in [1], it may have other mechanisms like sample memorization [A], but we kindly note that such understandings are not within the scope of our study, since our theory focuses on particular properties of AT. We will add this discussion in our revision. [A] The Pitfalls of Memorization: When Memorization Hurts Generalization. ICLR 2025 --- **Q2**: **The potential use of cross-class features** **A2**: Thanks for the insightful comments. Due to space limitations, please kindly refer to our [response](https://openreview.net/forum?id=FvBYG5jA7k&noteId=ryP80kF51O) to Q1 by Reviewer HgFV for a discussion on the future directions of cross-class features. --- **Q3**: **Regarding catastrophic overfitting** **A3**: Thank you for the insightful comment. Following your suggestion, we implemented a 200-epoch fast adversarial training with 1 step PGD attack, $\ell_\infty$-norm $\epsilon=8/255$ on PreActResNet-18 (same as the main observation experiment in our paper). The CAS for different stages is presented as follows: | Epoch | 50 | 100 | 150 | 200 | | --- | --- | --- | --- | --- | | CAS | 16.4 | 18.5 | 7.3 | 2.8 | | Robust Accuracy (PGD-10) % | 38.1 | 40.9 | 27.4 | 0.0 | The results also validate that the forgetting of cross-class features (measured by CAS) can be detected in the catastrophic overfitting of fast adversarial training, which aligns with the main discovery of our paper. We will include these results and related code, figures in our revision. Thanks again for raising this point! --- **Q4**: Class names displayed in the figures **A4**: Thanks for the kind suggestion. We will add these class labels to the saliency maps in our revision. --- **Q5**: Please ensure consistency in notation. **A5**: Thanks for the careful reading. We will unify them as $A_i$ in our revision. --- We truly appreciate your valuable and constructive feedback. If you have any further questions or concerns, please let us know. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed response. I have reviewed the additional experiments and discussions provided in response to the reviewers' comments. Overall, I appreciate the authors' thoughtful engagement with the feedback. In particular, the extended discussions—such as the Response to Q1 by Reviewer HgFV and the section Regarding Catastrophic Overfitting—significantly strengthen the contribution of the paper. I believe these insights are essential and should be included in the main paper. However, it would have been even more impactful if the discussion on the potential applications of the proposed method (Response to Q1 by Reviewer HgFV) had been included in the initial submission. Doing so could have further advanced the community’s understanding of adversarial robustness. For these reasons, I will maintain my current score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer v6k6, Thank you for your further response! We truly appreciate your acknowledgment that our rebuttal significantly strengthens our paper's contribution. We will definitely incorporate all of these extended discussions into the camera-ready version of our paper if accepted, especially the section regarding Catastrophic Overfitting and the future potential of cross-class features. Thank you once again for your suggestions, which are invaluable for strengthening the contribution of our work. Sincerely, Submission 6293 Authors
Summary: While successful at defending models against adversarial examples, the dynamics of adversarial training (AT) are poorly understood. This paper attempts to explain two properties of AT: robust overfitting, and the utility of soft labels over one-hot labels. These properties are studied through the lens of cross-class features, which are features used by the model for classification which are shared by multiple classes. It is shown that a well-fit robust model displays significant correlations between features for different classes; however, when the model is overfit, these cross-class features largely disappear. It is hypothesized that this contributes to the increase in test loss that is characteristic of robust overfitting. Furthermore, it is hypothesized that soft-label methods like knowledge distillation prevent the loss of cross-class features and therefore mitigate robust overfitting. Additional experimental results are provided to support these hypotheses. A theoretical model for adversarial training is thes described which illustrates the utility of cross-class features in robust classification. Claims And Evidence: I think the major claims made in this paper are backed up by the evidence presented. Methods And Evaluation Criteria: I think the methods and evaluation are largely consistent with the goals of the paper, but some minor issues I have are included in the strengths and weaknesses section below. Theoretical Claims: I didn't read the proofs in detail, but I did not find any obvious errors. Experimental Designs Or Analyses: I think the experimental design and analyses presented here are sound. Multiple datasets, architectures, adversarial attacks, and training methods are tested. However, many implementation details are missing, and I don't see a link to a repository with code, so it might be difficult to replicate the experiments. Supplementary Material: I briefly looked over Appendices A, B, and C. Relation To Broader Scientific Literature: This paper studies the problem of adversarial training, which is the focus of a large body of work, and looks specifically at the phenomenon of robust overfitting. This paper connects robust overfitting to the model's use of robust features, discussed in Tsipras et al. and Ilyas et al., and makes use of feature similarity and visualization techniques to explore this connection. Essential References Not Discussed: I am not aware of essential references that are not discussed here. Other Strengths And Weaknesses: Strengths - The description of cross-class features is novel and, according to the experimental results presented, appears to at least partially explain the phenomenon of robust overfitting. - The feature attribute correlation matrices are a very clear visualization of the phenomenon being described. - The hypotheses presented are backed up by both experimental and theoretical results. - The insights from this paper could help inspire further advances in adversarial training. For example, new knowledge distillation methods could be designed that better encourage the maintenance of cross-class features. Weaknesses - I find the saliency map visualizations to be unconvincing. It's not clear from figure 3 that these are not cherry-picked examples, and it's difficult to assess the rigor of the claims made in that paragraph. - The theoretical data model in Section 4 is fixed to 6 dimensions. While the results support the hypotheses posed in previous sections, there isn't any justification as to why we would expect these results to hold for different dimensions. - The term "feature" feels overloaded in different sections of this paper. In section 3.1, features are defined to be the activations of a feature extractor, while in the saliency map experiments a feature is some abstract quality of the input (i.e. the wheels on a car), while in section 4 a feature is a column in the dataset. I think it would benefit the paper to have a concrete definition of what a feature is in this context (and in particular, whether features are model-dependent). Other Comments Or Suggestions: The paper is well written, I didn't find any typos. Questions For Authors: 1. What was the reason for choosing a 6-dimensional data representation? At the moment, I'm assuming it's just because that's the smallest dimension with an interesting cross-class feature structure. 2. Would any complications arise when trying to extend the theoretical results to higher dimensions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer oCW6, Thank you for your valuable feedback. We address your concerns below. --- **Q1**: Implementation details **A1**: Thank you for your careful reading. We are committed to publishing our code upon publication. For implementation details regarding model training, we utilize the default hyperparameters according to a well-known adversarial training repository [1] listed below: | Parameter | | | --- | --- | | Train epochs | 200 | | SGD Momentum | 0.9 | | weight decay | $5\times 10^{-4}$ | | initial learning rate | 0.1 | | learning rate decay | 100, 150-th epochs (decay rate=0.1) | | training adversary | 10-step PGD | We will make these details clear in our revision. Thanks again for the reminder! [1] Bag of Tricks for Adversarial Training, ICLR 2021, https://github.com/P2333/Bag-of-Tricks-for-AT. --- **Q2**: Saliency map visualizations. **A2**: Thank you for your thoughtful comment. To show these examples are not cherry-picked, we will include a full page of visualization examples (ordered by original sample ID) in the appendix of our revision, where many saliency maps of these examples still exhibit such properties. However, we acknowledge that **not all** samples enjoy such clearly interpretable features (e.g., *wheels* shared by *automobiles* and *trucks*), since features learned by neural networks are subtle and do not always align with human intuition, including cross-class features. Thus, we will tone down related claims made in that paragraph and highlight that the saliency maps are only presented to help understand the concept of cross-class features. --- **Q3**: The theoretical data model in Section 4 is fixed to 6 dimensions. What was the reason for choosing a 6-dimensional data representation? Would any complications arise when trying to extend the theoretical results to higher dimensions? **A3**: Thanks for the thoughtful comment. First, we acknowledge that using a 6-dimensional data representation is the smallest dimension for illustrating our insights into cross-class features, since binary classification is not capable of handling cross-class features as they do not influence the binary classification result. Therefore, we explored a ternary classification problem with minimum dimensions in our framework. Following your suggestion, we attempted to extend our theory to more feature dimensions for ternary classification . For each original feature $x_{E,j}$ or $x_{C,j}$, we can extend them to $x_{E,j}^k$ or $x_{C,j}^k$, where $k=1,2,\cdots,K$, thus resulting in $6K$ feature dimensions. Accordingly, we also have corresponding parameters $w_1^k$ and $w_2^k$ for $k=1,2,\cdots, K$. Based on this extended model, we can derive similar results in Theorems 1-3, where the bounds are set for $w_1^k$ and $w_2^k$. The proof sketch includes deriving the optimal perturbation $\epsilon$ with Lemma 2, calculating the optimizing objectives like equation (26), and finally derive the solution of $w_1^k$ and $w_2^k$ for minimizing the objectives like equation (27) and (29). Due to space limitations, we are unable to present all proof details here, but we will include this extension in the appendix of our revision. Thank you again for the valuable suggestion! --- **Q4**: The term "feature" feels overloaded in different sections of this paper. **A4**: Thank you for your careful reading. We clarify the definition of “features” as the activation in a particular layer (feature extractor), as stated in Section 3.1. Thus, the feature is model-dependent under this definition. Regarding saliency maps and Section 4, we apologize for the potential confusion and will replace the term “features” as follows. For saliency maps, we will use the term “class-specific discriminative regions” that was used in the original paper of [2] GradCAM for the highlighted regions. For $x_{E,j}$ and $x_{C,j}$ in Section 4, we will define them as “attribution” instead of features to distinguish them. Thanks again for your kind reminder, we will revise these terms in our revision. [2] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. IJCV 2019 --- We truly appreciate your valuable and constructive feedback. If you have any further questions or concerns, please let us know.
Summary: This paper proposed a novel perspective to understand adversarial training. By splitting features into cross-class features and class-specific features and investigating model learning behaviors on cross-class features, this paper demonstrated the importance of cross-class features in improving model robustness. Based on that, this paper also provided an interpretation of robust overfitting and the advantage of soft-label in adversarial training based on cross-class features. Claims And Evidence: The main hypothesis of this work is that cross-class features are very helpful for improving robust generation of model trained by adversarial training. However, during adversarial training, the model only learns cross-class features at the initial stage and gradually ignores these features after some checkpoint. Authors conduced both theoretically analysis and empirically studies to support this hypothesis. Methods And Evaluation Criteria: This work utilized benchmark datasets to conduct empirical studies and proposed a new metric Class Attribution Similarity (CAS) to evaluate the usage of cross-class features during adversarial training. Theoretical Claims: I quickly went through proofs provided in the supplementary material. Experimental Designs Or Analyses: Both theoretical analysis and empirical studies strongly support authors' hypothesis about the role of cross-class features played in adversarial training. Supplementary Material: Yes, I read the algorithm and experimental results provided in supplementary material. Relation To Broader Scientific Literature: This work provided a novel perspective to understand adversarial training, which might be able to inspire future works to develop more advanced adversarial training based method to improve model robustness. Essential References Not Discussed: No. Other Strengths And Weaknesses: Pros: 1. This work provided a novel perspective to understand adversarial training and presented a hypothesis about it by investing cross-class features. 2. Both theoretical analysis and empirical studies strongly support authors' hypothesis and also explain the robust overfitting problem and the advantage of soft-label in adversarial training. Cons: 1. It would be better if authors could discuss, based on the findings in this work, any possible ways to develop advanced adversarial training methods. Other Comments Or Suggestions: No. Questions For Authors: 1. As authors' claimed, after some checkpoint, the trained model reduce its reliance on cross-class features, which result in the reduced robustness on test data; so could we understand this phenomenon in another way, i.e. some class-specific features are naturally contradicted with cross-class features, as cross-class features are helpful for model robustness, the robustness of model will be reduced if model starts learning those contradicted class-specific features after some checkpoint. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer HgFV, Thank you for your valuable feedback. We address your concerns below. --- **Q1**: It would be better if authors could discuss, based on the findings in this work, any possible ways to develop advanced adversarial training methods. **A1**: Thank you for the thoughtful comment. We list some potential directions for further applications of our theory as follows: - **Data (re)sampling**. Since more generated data is prone to be helpful for advancing adversarial robustness [1], it requires significantly more data and computational costs. From the cross-class feature perspective, adaptively sampling generated data **with considerations of class-wise relationship** may improve the efficiency of large-scale AT and decrease the forgetting of cross-class features. - **AT configurations**. Customizing AT configurations like perturbation margins or neighborhoods is useful for improving robustness [2]. Since cross-class features are more sensitive against robust loss, customizing AT configurations for different samples or classes **based on class-wise relationships** may mitigate this sensitivity and further improve robustness, as shown in Theorem 1. - **Robust module design:** The architecture of a model and the mechanisms of activation play a crucial role in adversarial robustness [3]. Therefore, designing modules that either implicitly or explicitly emphasize cross-class features may enhance robustness. For example, calibrating channel activation can improve robustness [4], thus creating activation mechanisms that preserve more cross-class features can further contribute to this improvement. In addition to these AT algorithms, we would also like to highlight the **theoretical modeling potential** of our work. Similar to the robust/non-robust feature decomposition [5], which has been applied in many subsequent theoretical works, e.g. [6,7,8], our cross-class feature model has the potential for more in-depth modeling of adversarial robustness, contributing new tools in its theoretical analysis. [1] Better Diffusion Models Further Improve Adversarial Training. ICML 2023 [2] CAT: Customized Adversarial Training for Improved Robustness. IJCAI 2022 [3] Robust Principles: Architectural Design Principles for Adversarially Robust CNNs. BMVC 2023 [4] Improving Adversarial Robustness via Channel-wise Activation Suppressing. ICLR 2021 [5] Adversarial examples are not bugs, they are features. NeurIPS 2019 [6] On the Tradeoff Between Robustness and Fairness. NeurIPS 2022 [7] Understanding the Impact of Adversarial Robustness on Accuracy Disparity. ICML 2023 [8] Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data. ICLR 2025 --- **Q2**: As authors' claimed, after some checkpoint, the trained model reduce its reliance on cross-class features, which result in the reduced robustness on test data; so could we understand this phenomenon in another way, i.e. some class-specific features are naturally contradicted with cross-class features, as cross-class features are helpful for model robustness, the robustness of model will be reduced if model starts learning those contradicted class-specific features after some checkpoint. **A2**: Thank you for the alternative interpretation of our theory, which aligns well with our understanding. There may indeed be some contradiction between cross-class and class-specific features, caused by various factors such as model capacity or feature overlap. As a result, learning more class-specific features for lower training robust loss can lead to a decrease in cross-class features, as discussed in Section 3.2. Thank you once again for your insightful comment; we will incorporate this discussion into our revision. --- We truly appreciate your valuable and constructive feedback. If you have any further questions or concerns, please let us know.
null
null
null
null
null
null
Ad-Hoc Human-AI Coordination Challenge
Accept (spotlight poster)
Summary: The paper introduces the AH2AC2 to evaluate human-AI teamwork in Hanabi, a cooperative card game. Key contributions include: - AH2AC2 Framework: A standardized benchmark using human proxy agents (trained via behavioral cloning + RL) as evaluation partners, hosted via a controlled API. - Open-Source Dataset - Human Proxy Agents: Combining BC with KL-regularized RL, achieving robust performance while retaining human-like behavior. - Baselines: Evaluation of zero-shot (e.g., OBL) and data-driven methods (e.g., BR-BC), revealing gaps in integrating limited human data. Main results show OBL outperforms human-data-dependent methods, highlighting challenges in leveraging small datasets. Proxies exhibit human-like behavior via metrics (e.g., IPP, Communicativeness) but lack direct human validation. Claims And Evidence: Claims are generally supported, however there are some concerns: - Proxy human-likeness is validated via cross-play with BC, action prediction, and behavioral metrics. However, direct comparison with real human players is absent, weakening claims about human compatibility. Also, overtime, the human behavior may change while the proxy agents will be fixed. - Superiority of regularized RL over BC is evidenced by improved self-play scores (Table 2). - Challenge utility is demonstrated via baseline evaluations (Table 5), though reliance on synthetic proxies (not humans) limits real-world applicability claims. Methods And Evaluation Criteria: - Hanabi is appropriate for testing coordination under partial observability. - KL-regularized RL is a sensible approach to balance human imitation and robustness. - API-hosted proxies prevent overfitting, but the limited open dataset (1k games) may restrict research scope, thus the same concern as closed existing evaluation methods would apply to the proposed benchmark. - Three-player evaluation is underexplained given data scarcity (46k games vs. 101k for two-player). Theoretical Claims: No new theoretical claims. KL regularization builds on prior work. Theoretical analysis of HDR-IPPO is deferred to future work. Experimental Designs Or Analyses: - Self/cross-play and action prediction tests are sound but rely on proxy-human comparisons which could be limited in terms of generalization similar to human players. It would be worthwhile to evaluate with human players as well and compare the results. - Behavioral metrics (IPP, Communicativeness) are validated against a human dataset (Table 4), but metrics are narrow. The paper validates human proxy agents using IPP and Communicativeness, which measure strategic aspects of gameplay (e.g., hint frequency and card knowledge). However, these metrics narrowly focus on specific behaviors, omitting critical dimensions of human-like coordination. For instance, they don't assess risk management (e.g., balancing safe plays versus strategic gambles), temporal adaptation (adjusting strategies as the game progresses), or error recovery (recovering from misplays through ad-hoc coordination)—key facets of human adaptability. Additionally, while prior Hanabi research incorporates broader metrics like hint utility (hint effectiveness) or convention compliance (adherence to implicit rules), the current analysis lacks these, limiting claims of comprehensive human-likeness. Furthermore, the reliance on quantitative metrics overlooks qualitative aspects such as trust or intent alignment, which could be captured through human evaluations (e.g., surveys or A/B testing). Expanding the evaluation to include these dimensions would strengthen assertions about behavioral fidelity and better reflect the complexity of human teamwork. - Ablation study (Appendix A.8) on regularization strength is critical but not detailed in the main text. I believe it's worth discussing this in more detail in the main paper. Supplementary Material: Appendices cover data splits, hyperparameters, and ablation study. Relation To Broader Scientific Literature: Connects well to Hanabi research (e.g., SPARTA), zero-shot coordination (OBL), and ad-hoc teamwork. Missing recent LLM-based coordination studies (e.g., theory of mind in LLMs), though mentioned in future work. Essential References Not Discussed: No critical omissions noted. Other Strengths And Weaknesses: - Strengths: Practical framework, reproducible evaluation, actionable insights (e.g., data-efficient methods needed). - Weaknesses: Limited human validation, sparse three-player data, incremental methodology (BC+RL), limited to only one environment. Other Comments Or Suggestions: In Section 2, discount factor is said to be in range $[0,1]$. How extreme different values affect the results, eg $\gamma = 0$ vs $\gamma =1$? Minor: in line 163, authors should use `citet`. Same should be applied in several places in the Appendix. Questions For Authors: - Human Validation: How do proxies perform when paired with real humans? If untested, how confident are you in their human-likeness? And how can this be evaluated? - Three-Player Data: Why prioritize three-player evaluation given limited data? Does data scarcity bias proxy behavior? - LLM Integration: The conclusion mentions LLMs. What are the plans to benchmark LLM-based agents in AH2AC2? - Since this is a benchmark paper, it's of utmost importance to keep it updated and maintaining the projects. What are the promises and plans by the authors to do so? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear reviewer, Thank you for the detailed and constructive feedback. We appreciate the opportunity to address your points and clarify aspects of our work. Please find our answers and improvements below. ## Addressing Reviewer Questions ### Q1. Human Validation We agree that testing human proxies with real humans would be ideal for a human-AI benchmark, but large-scale evaluations are impractical due to logistical challenges and resource constraints. Therefore, we created these proxies using a method studied extensively in prior research, and our experiments and tests confirm they exhibit **human-like qualities** and behave as expected. Overall, our validation shows that the proxies capture key aspects of human play, making them suitable and robust partners for the benchmark. ### Q2. Three-Player Data We thank the reviewer for this excellent question! We collected data for two-, three-, four-, and five-player games; our analysis showed that proxies for four- and five-player settings gave less convincing results due to data sparsity. In contrast, the **three-player setting, with 46k games**, showed strong performance (comparable to the two-player setting), and tests on data subsets revealed diminishing returns with more data. This evidence suggests that the 46k games available for the three-player setting are sufficient to train robust, human-representative proxies, making it a valuable and worthwhile addition to the benchmark. ### Q3. LLM Integration Thank you for your suggestion. To strengthen our benchmark and given the popularity of LLMs, we have implemented an LLM-based agent using **o3-mini** and are currently benchmarking it with **AH2AC2**. So far, we're getting scores ranging from **14 to 20 out of 25** (along with some games where all lives are lost), and we'll include these results in the final paper copy, along with additional experiments and results. We believe this is a strong addition to our paper and hope the reviewer will recognise the effort and resources we invested into benchmarking LLMs. ### Q4. Benchmark Maintenance We are committed to the long-term success of the AH2AC2 benchmark. We will maintain and update the evaluation website, submission API, and leaderboard, offer ongoing support, and expand the benchmark, potentially adding four and five-player settings and other Hanabi variants as we collect more data and different techniques emerge. ## Addressing Other Comments **Fixed proxies and dynamic behavior:** Thank you for raising this point. Although our agents’ parameters are fixed after training, their in-game behavior is dynamic – _fixed parameters do not imply fixed behaviour_. Due to space constraints, please see our response to reviewers CEGr and FDiM. Action prediction results (Tables 3 & 4) confirm high accuracy/low loss on unseen human data, indicating they capture diverse strategies and adapt in-game. **Challenge utility:** While we agree that human-AI play is ideal and we acknowledge this limitation, large-scale human testing is often impractical and hard to reproduce. Our approach offers a practical alternative: we create robust, reproducible proxy agents trained on extensive real human gameplay data using _SOTA methods_ for developing human-like policies. **API-hosted proxies:** Thank you for this comment. We believe AH2AC2 improves on previous methods that used completely inaccessible proxies and closed datasets. Our API provides _open_, but _controlled, pre-registered access_ to human proxies, ensuring fair, reproducible evaluation. **Behavioral metrics:** Thank you for your suggestions regarding the breadth of our behavioural evaluation. Our experiments show that BC agents trained on the entire dataset achieve better coordination when paired with our human proxies (*Figure 2*), proving that our proxies adapt and recover from errors even with fixed BC agents. Low **IPP** means hints are effective (they carry implicit info that players must interpret), and our action prediction results (*Tables 3 & 4*) confirm the proxies learn a wide range of implicit conventions. We couldn’t find standardised definitions for the extra metrics like *hint utility* or *convention compliance* in prior Hanabi works, so we'd be grateful if you could point us to any. We believe our current analysis provides substantial evidence of human-like behavior, and we're open to expanding our evaluation with suitable metrics. **Ablation study:** We appreciate the time you invested in reading the appendix. We completely agree that it should be present in the main text. We will integrate key findings into the main text in the camera-ready version, using an extra page allowance. --- We hope these clarifications and planned revisions address your concerns and strengthen our paper. Thank you for your valuable feedback. if you feel our responses have sufficiently resolved your concerns, we kindly ask you to consider updating your score accordingly.
Summary: This paper trains a human proxy model from human gameplay records on the Hanabi game and proposes that the proxy model can be used as a cheaper evaluation for algorithms developed for human-AI coordination. They also open-sourced a smaller human dataset on Hanabi. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: This work does not include proofs. Experimental Designs Or Analyses: Yes, I reviewed the experiments in Sec 5. Supplementary Material: I read A1, A2, A3, A4, and A6. Relation To Broader Scientific Literature: In human-AI coordination, a good human proxy model is important to evaluate the performance of trained agents. This work provides human proxy models trained from a large human dataset, open-sourced a human dataset, and built a public evaluation platform, which are helpful for future studies on human-AI coordination problems. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: This paper provides crucial components for human-AI coordination evaluation. The analysis of the human proxy model is comprehensive. The benchmark is a good evaluation for previous human-AI coordination algorithms. Weaknesses: Since the main purpose is to evaluate human-AI coordination, the most important aspect should be whether the model can reflect real human evaluation results. However, this paper dose not have human study results to show whether the performance against real humans aligns with the performance against human proxy models. Other Comments Or Suggestions: No. Questions For Authors: 1. Humans are known to be diverse. How do you model different strategies with only two proxy models? It is mentioned in line 812 that the dataset ``adhere to H-Group Conventions’’. Does that constrain the strategy coverage of the dataset? 2. Compared to neural policies, humans are usually much more adaptive. Does that apply to the human dataset in Hanabi? If so, is adaptation captured in the human proxy model? 3. The numbers in table 4 are close, but (since it is not ``normalized’’) it is hard for me to directly see why it shows the behaviors are close. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, Thank you for your positive feedback and detailed review of our paper. We especially appreciate you taking the time to read the appendices. We hope the following answers clarify the points you raised. ### Question 1 We thank the reviewer for raising this point, and we agree with the point that human play is diverse. Regarding the **H-Group Conventions** mentioned (line 812), we want to clarify what this means in practice and how it relates to strategic variety. While our dataset mainly features games where **H-Group Conventions** are used, this term doesn't refer to a single strategy. Instead, it is helpful to think of **H-Group Conventions** as a collection of different strategies and techniques that players learn and combine. In fact, conventions are sorted into levels. Each level uses a different set of strategies that are used. Mastering even beginner H-Group levels requires learning several of these. Players often mix and adapt these strategies within a single game – depending on the player's strength. Therefore, the dataset itself naturally contains a variety of playstyles. Our action prediction results (Table 3 & Table 4) show that the trained proxies achieve good accuracy and low cross-entropy loss on unseen human data. This suggests that they have successfully learned to represent multiple strategies and patterns present in the human dataset. We hope this explanation clarifies that the use of **H-Group Conventions** as a foundation does not overly constrain the strategic variety present in our data, and consequently, in what our proxy models represent. Finally, as a similar concern was raised by multiple reviewers, we will clarify these points in our camera-ready submission. ### Question 2 Adaptation is a crucial aspect to consider when developing human proxies, and we thank the reviewer for raising this point. To clarify how our proxies address adaptation, we believe it is helpful to consider two adaptation types: * **Adaptation within a game:** This refers to adapting actions based on the current game state, partner's moves, and available information within a single game. Our human proxies exhibit this type of adaptation. While the parameters of the neural network policy are fixed after training, behaviour adapts dynamically, much like a human player reacts to changing circumstances. When the proxy agents see an unexpected action or observation, then from that action-observation history onward, they will account for the fact that the other agent uses a different convention. Thus a fixed policy does not imply fixed behaviour. * **Adaptation between games/learning new strategies over time:** This means changing fundamental strategies or learning entirely new conventions. Our proxy models are not designed to do this. We want them to consistently represent the playstyle of the population found in our dataset, providing reproducible and consistent partners for evaluation. Furthermore, our experiments provide evidence for the proxy's within-game adaptability. As shown in Figure 2, BC agents (which can be quite rigid and brittle) perform better when paired with our human proxies compared to SP. This suggests that our proxies are flexible enough within a game to coordinate effectively even with simpler partners and can adjust to mistakes made by these less sophisticated policies. We hope this clarifies the specific type of adaptation our human proxy models capture and why they are designed this way for consistent evaluation. ### Question 3 We are happy to emphasize why we choose our metrics and how we intend them to be interpreted. - **IPP (Information per Play)** is normalised to a scale of 0 to 1. A value of 0.44 means players, on average, know slightly less than one “information” (color or rank) about the cards they play. - **Communicativeness** is defined as a percentage of turns where a player gives a hint, when it's possible to give a hint. The fact that the values for these behavioural metrics are close between different models suggests that they exhibit similar tendencies regarding information usage, hint efficiency, and communication frequency. We aimed for these metrics to provide a clear comparison. Perhaps we misunderstood your concern regarding normalisation? If you could elaborate on what aspect feels unnormalized or difficult to interpret, we would be happy to provide further clarification. --- We hope these explanations address your questions. Thank you again for your valuable insights. If our responses have resolved your concerns, we would appreciate it if you would consider increasing the support for our work.
Summary: This paper proposes a new ad-hoc human AI co-ordination challenge using the game of Hanabi. The authors have trained a human proxy agent and have created a controlled benchmark environment for researchers to test their new ad-hoc coordination algorithms. Claims And Evidence: This is a benchmark paper. The main claim is that the human proxy agent approximates human game play. Authors have done several analysis to justify this claim. Methods And Evaluation Criteria: Check Q2 for my concern about the proposed evaluation criteria to measure the progress in Human - AI coordination. Theoretical Claims: There is no theory or proof in the paper. Experimental Designs Or Analyses: Yes. No complaints. Supplementary Material: I only did a quick skim of supplementary material based on their mention in the main paper. Relation To Broader Scientific Literature: The paper is well-positioned in the literature. Essential References Not Discussed: * It is worth benchmarking multi-task learning agent from Nekoei et al, ICML 2021. (https://arxiv.org/abs/2103.03216) * It is worth noting that the final future work mentioned in the paper has already been explored in this recent ICLR paper: https://openreview.net/forum?id=pCj2sLNoJq (of course this was after the ICML submission deadline) Other Strengths And Weaknesses: Strength: * Great first step towards designing algorithms for better human-AI coordination. Weakness: * Having only one human proxy agent is a weakness. Other Comments Or Suggestions: * In page 1, what is "broad-based progress"? * Setting up API access for evaluation is a great idea to avoid overfitting! Questions For Authors: Q1. The main premise of this paper is the collection of a large data set of two-player and three-player games. The authors have decided not to release the dataset and only release a small subset for the researchers to finetune with. However, it is not clear what will stop other researchers from scraping the dataset themselves or just collecting more episodes of human play and training an agent using that. Can you clarify whether this is allowed to participate in the leaderboard? Q2. The fact that there is only one human proxy agent is very limiting. In practice, the agent might have to deal with multiple humans and each human will have their own style and strategy. Why is it that making progress in this single proxy benchmark would lead to better human AI coordination? Q3. Why dont you have access to OBL weights for 3-player games? Did you try contacting the authors? Q4. Authors are testing OBL, OP, and FCP. Another relevant agent that can do better zero-shot coordination that is missing here is the multi-task learning agent from Nekoei et al, ICML 2021. It can also be considered as population based and I would like to see this agent benchmarked here as well. Q5. Is it right that you train 3 seeds for BC, BR-BC, and HDR-IPPO and then pick the best seed based on the validation set? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your thorough review and constructive feedback. We have carefully considered your questions and provided our responses below. **Q1. Benchmark Fairness** We thank the reviewer for raising this point and agree that it is theoretically possible for researchers to attempt scraping game data or collecting new human data independently, and believe this is a valid concern. To clarify, the core goal of AH2AC2 is to measure how well methods perform when trained *only* on a limited dataset we provide, and using scraped data would not be fair or allowed. Having in mind that scraping is theoretically possible, as correctly noted by the reviewer, we rely on community transparency. We will strongly encourage participants to make their training code publicly available and reproducible. Also, submissions on the leaderboard will indicate whether reproducible code has been provided (and link to the codebase will be added in this case). We will update our submission system to account for this! We hope this helps reduce reviewers' concerns. Finally, while possible, scraping game data is non-trivial, and crucially, there is currently no open-source dataset of human play available. We should also note that past benchmarks (such as Overcooked AI) relied solely on trust. We believe our approach offers a more rigorous and fair evaluation protocol – we aimed to reduce the chance of overfitting, but we can't exclude the possibility of unethical practices completely. We hope this addresses your concerns regarding data usage and the fairness of the benchmark. --- **Q2. Human Proxies and Benchmark Goals** We thank the reviewer for raising this important point. We will update the paper to clarify both the specifics of our agents and the goals of AH2AC2. In particular: 1. **Number of Agents:** We developed *four* human proxy agents (two for the 2-player setting and two for the 3-player setting), not a single one. We will make this clearer in the paper. 2. **Diversity of Playstyles:** You are right that real-world human play varies significantly. Our agents were trained on a large dataset of games acquired from the `hanab.live` community. Players on this platform generally follow H-group conventions, but this does not mean they follow a single strategy. Instead, H-group conventions are a collection of diverse strategies and techniques. Players mix and match these based on their skill and the game context, leading to significant variation in play style within the dataset our agents learned from. We agree this is not clear in our original paper, and we will update the paper to provide more details. Finally, we would like to clarify our goal for the benchmark. Our approach is pragmatic and mirrors a real-world use case: * There is a given (inherently meaningful) human population that an algorithm is supposed to cooperate with. * We assume it is possible to collect a small dataset from this population (as is often the case). * The task is then to develop a method that can do well with the source population while having access to only this small dataset. Success on our benchmark indicates an algorithm's ability to effectively learn cooperative strategies from limited, representative human data, which we believe is a key step towards building human-compatible agents/systems. We hope this explanation clarifies the nature of our human proxy agents and the practical relevance of our benchmark. --- **Q3. 3P OBL Weights** We thank the reviewer for reading our paper in detail and raising this point. We contacted several authors of the original OBL paper, and they informed us that they unfortunately no longer have access to the 3P OBL weights and the weights were not released with their original work. Re-implementing OBL in JAX is a complex task, and it is beyond the scope of our current work. --- **Q4. Multi-task Learning Agent** We definitely agree that lifelong learning and multi-task learning algorithms could be highly competitive in AH2AC2. However, given the substantial complexity involved in fully implementing and tuning such an agent, we consider this outside the immediate scope of our benchmark. That said, we encourage and welcome the broader research community to benchmark such methods within AH2AC2, as these promising techniques may significantly enhance human-AI coordination. --- **Q5. Seeds** This is correct. --- Regarding <https://openreview.net/forum?id=pCj2sLNoJq>: We thank the reviewer for highlighting this very relevant ICLR paper. We agree that this recently published method directly addresses one of our proposed future works and represents a valuable addition to our study. We plan to add this approach to our benchmark as soon as possible. --- We hope we have addressed the reviewer's comments and concerns. We are happy to discuss any of these points further. If our response has resolved your main concerns, we would be grateful if you would consider updating your score.
Summary: The authors present a new test-bench for Human-AI collaborative RL using Hanabi. Claims And Evidence: The main claim is the creation of a benchmark test which is present but with only one team's submissions. Thus the claims that the benchmark will impact the community or improve RL in general aren't supported. Methods And Evaluation Criteria: I'm not convinced the benchmark will work as described. The use of human proxies means that the learning objective is static, Hanabi was originally picked as a target for RL as it requires dynamic planning. Optimal Hanabi agents can update their strategies as they observer the other players. A static (even if complicated) player model doesn't have this property, ditto the human games. How does success on the leader-board show improvements in Human-AI collaboration? Theoretical Claims: N/A Experimental Designs Or Analyses: See Methods Supplementary Material: No Relation To Broader Scientific Literature: This paper claims to be building support for the literature to grow, but does not display new techniques or methods itself. I'm not sure ICML is the correct venue as a result Essential References Not Discussed: N/A Other Strengths And Weaknesses: The model training and evaluation looks good, but I think explaining why this paper is relevant to people outside Hanabi researchers would greatly aid the paper. The methods used to train the models do not appear to be novel, and releasing a testing procedure along with new models is also common. So, if the authors can explain the "original and rigorous research of significant interest to the machine learning community" component then this would be good paper, but I feel like I'm missing that. Other Comments Or Suggestions: See above Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your feedback. We appreciate the opportunity to address your concerns and clarify the contributions of our work. ## Adaptivity of Human-Proxy Agents You raised an important concern about whether using human proxies, trained using fixed parameters, sufficiently captures the dynamism necessary for Hanabi, where players adjust strategies based on their partners’ behavior. Critically, fixed neural parameters do not imply static behavior; our proxies condition on the history of the game, which includes partner actions. Specifically, when the proxy agents see an unexpected action or observation, from that action-observation history onward, they will account for the fact that the other agent is using a different convention. The proxies are trained using SOTA methods and on a diverse dataset of human gameplay data. Some concrete evidence that the proxies are capable of adaptivity includes: - **Action Prediction Results (Tables 3 & 4):** The proxies accurately predict human actions on unseen human gameplay data, which comes from players of various skill levels, who follow different levels of H-group conventions. Such accuracy requires the ability to infer from a given action-observation history what sort of conventions the human is following. - **Successful Coordination with Brittle Partners (Figure 2):** The proxies achieve high performance even when paired with behavioral cloning agents, which are known to be prone to suboptimal play. The proxies’ success even in such scenarios provides strong evidence that they make adaptations during gameplay to ensure effective coordination. We have further clarified this in the paper and hope this addresses how the proxies reflect dynamic adaptivity similar to human play. ## Real-World Relevance of AH2AC2 Success in human-AI collaboration requires the ability to robustly generalize cooperative behaviors from limited human data. As well, the literature requires a standardized testing protocol to rigorously measure algorithmic progress. AH2AC2 directly addresses both of these fronts: we release an experimental suite consisting of carefully validated human-proxy agents, trained on extensive human gameplay data, which serve as robust and reproducible evaluation partners. As well, we open-source a deliberately limited dataset of human games to encourage research specifically on data-efficient coordination algorithms. ## Novelty and Suitability for ICML We appreciate the opportunity to clarify our decision to submit this work to ICML. Benchmark creation is explicitly recognized by ICML as a meaningful and rigorous research contribution. In particular, our benchmark required: - **An Extensive Original Effort:** Large-scale human data collection, meticulous data cleaning, validation, and rigorous modeling using SOTA methods. - **Novel Evaluation Platform:** Developing reproducible human proxy models, carefully validating them, and establishing a standardized evaluation protocol for human-AI coordination in partially observable, cooperative environments. To the best of our knowledge, AH2AC2 is the first benchmark to explicitly assess human-AI coordination in complex, partially observable settings using realistic human-proxy agents. ## Conclusion In light of your review, we have modified the paper to make the broader significance of our work more clear. We believe a benchmark such as AH2AC2 is highly necessary for progress in human-AI collaboration, and is to date decidedly lacking in the literature. Please let us know how we can further improve the paper. Otherwise, if you believe these clarifications have benefited the work, we would appreciate a corresponding update in the score. Thank you for your continued engagement with the review process.
null
null
null
null
null
null
A New Rejection Sampling Approach to $k\text{-}\mathtt{means}$++ with Improved Tradeoffs
Reject
Summary: This paper presents a new fast adaptation of k-means++ algorithm. Claims And Evidence: The algorithm and results are clearly stated and match the claims in the paper. The only claim that is not supported is the advantages of the approach in this work compared to the baselines. Authors provide comparison with both Bachman and Cohen-Added results but it is not clear at all if they outperform any of them (the running time presented in this paper for Cohen-addad seems to be incorrect, see Cor 5.5). Methods And Evaluation Criteria: Yes. Theoretical Claims: It is not clear from the paper in what regimes their algorithm outperforms the baselines. The theoretical results seems to be correct. Experimental Designs Or Analyses: Yes. The experimental section is in a good shape, various databases are considered and the results are conviencing. Supplementary Material: No. Relation To Broader Scientific Literature: This paper is interesting in the field of k-means clustering. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - The paper is well-written. - The experimental section is convincing. - The problem studied is important, and improving the running time has been one of the main challenges in this field. Weaknesses: - The theoretical advantages of this work are not clear. - The novelty is below the expectation for this conference. Other Comments Or Suggestions: No Questions For Authors: Please address the concern above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer yGTT, Thank you for your thoughtful and constructive review. We address the concerns below: > (1) The running time presented in this paper for Cohen-Addad seems to be incorrect ... The statement in Cor 5.5 did not explicitly state the dependence on $k$ for the algorithm, so we decided to use the more explicit statement in section E.2. Our notation differs slightly in using $\eta \leftrightarrow \Delta$ and $\varepsilon^{-1} \leftrightarrow c^2$. > (2) The only claim that is not supported is the advantages of the approach in this work compared to the baselines ... The advantages of MCMC methods over the standard kmeans++ are well studied in the literature ([1],[2]), in being much faster while providing competitive seeding quality. So we use the MCMC method as our _"updated"_ baseline and design our experiments to show the advantages of our method over [2]. This is because as shown in [2], the $O(|X|kd)$ run-time of the standard kmeans++ quickly becomes prohibitive for large datasets. However, if the reviewer feels that including the comparison with standard kmeans++ will increase the readability of our paper, then we will be sure to include it in the final version. > (3) ... What regimes their algorithm outperforms the baselines ... As discussed in response (5) to reviewer X1mD, it is possible to construct cases where the standard kmeans++ outperforms both MCMC and our method. This corresponds to datasets with large $\beta$ values. In the small $\beta$ value regime (which we observed for several large datasets), both MCMC and our method outperform kmeans++. Within this regime, our method (RS-kmeans++$(\cdot,\cdot,\infty)$) will consistently outperform MCMC since it adjusts automatically to the $\beta$ values as opposed to MCMC, where an appropriate parameter dependent on $\beta$ must be provided. We also have another method where we provide a similar parameter as input, but even here we obtain provably better convergence properties as compared to the MCMC method. > (4) The theoretical advantages of this work are not clear $\dots$ We highlight our key contributions : 1._New trade-offs_. We present a method of performing kmeans++ seeding in time $\tilde{O}(nnz(X) + mk^2d)$ with the approximation guarantee having an additive and scale invariant $k^{-\Omega(m/\beta)} \operatorname{Var}(X)$. Note that this has an exponentially decaying dependence on $m$, improving upon previously known MCMC methods which attain $m^{-1} \operatorname{Var}(X)$ additive error (which is linearly decaying). 2._Automatic determination of the correct number of iterations_. Our approach which runs in time $O(\beta k^2d)$ does not require a pre-specification of the number of iterations of rejection sampling. Indeed, RS-kmeans++ is \emph{``smart"} in the sense that it will spend less on datasets that are not well-clusterable as opposed to kmeans++, which runs in the same time regardless of the dataset. 3._Theoretical Analysis of $\delta$-kmeans++_. We use a careful potential-based argument to bound the solution quality of $\delta$-kmeans++, which was not previously known. We believe that this may be of independent interest as well. 4._Fast data updates_. The particularly simple form of the probability distribution that we sample from can be exploited to support pre-processing in input sparsity and data updates in $O(\log |X|)$ time, which may be useful in scenarios where there are dynamic updates to the data being clustered. 5._Parallel setting_. Unlike the inherently sequential MCMC methods, our rejection sampling approach can be easily parallelized since each rejection sampling step is independent of others. We shall be glad to address any other concerns the reviewer may have.
Summary: The paper proposed a new seeding algorithm for kmeans clustering, accelerating the kmeans++ by leveraging rejection sampling techniques. The key idea is to select new centroids in a way that maintains probabilistic separation from existing ones (alike the kmeans++). Instead of explicitly computing the D2 distribution, which requires evaluating all data points, the authors introduce a data structure that explosit feature sparsity to enable efficient sampling. They proved that their method remains O(log k)-competitive, matching the theoretical guarantee of k-means++, while achieving up to 70x speed up over a prior MCMC-based seeding method. Claims And Evidence: The primary claims are supported by both theoretical analysis and empirical results. For minor problems, see below. Methods And Evaluation Criteria: However, I have concerns regarding the practical applicability of the proposed methods, particularly in two aspects: 1. Limited impact on overall clustering efficiency. Since the proposed algorithm optimizes only the seeding phase of clustering, its contribution to the total runtime improvement may be marginal. The primary computational cost in k-means stems from the iterative refinement steps, not the initialization. Therefore, the claimed speedup might be overstated. The authors should provide stronger justification for why accelerating the seeding step translates to meaningful performance gains in practical applications. 2. Dependence on data sparsity. The method appears to leverage feature sparsity to achieve its efficiency gains. However, many modern applications, such as image and text embeddings, involve dense vector representations. How does the proposed approach generalize to such settings? The authors should clarify its effectiveness in scenarios where data sparsity cannot be exploited. Theoretical Claims: The proofs appear to be correct under a basic check. However, I have some concerns: 1. Accounting for preprocessing time in total complexity The claimed time complexity does not seem to explicitly include the cost of the preprocessing step. In particular, the centering step (subtracting the mean from all points) inherently requires $O(|\mathcal{X}| d)$ operations, regardless of data sparsity. Since this centering operation is a necessary preprocessing step for the proposed method—but not for prior approaches—its cost should be accounted for in the overall complexity analysis. However, it appears to be omitted from the provided bounds. Experimental Designs Or Analyses: The analysis is rigorous, and the experiments are well-structured to support the authors’ primary claims. However, further clarification on the practical impact of the proposed method, particularly in the context of overall k-means efficiency, would strengthen the paper. Supplementary Material: I reviewed the programs in the supplementary materials and did not find an implementation of the claimed “sample and query access data structures.” Instead, the preprocessing appears to be implemented using a basic matrix data structure. If these specialized data structures are essential for achieving the claimed efficiency improvements, their absence raises concerns about the practical feasibility and reproducibility of the method. The authors should clarify whether these data structures were omitted, not required in practice, or implemented differently than described in the paper. Relation To Broader Scientific Literature: The contribution to the broader scientific literature is questionable, as the work focuses on a highly specific topic—accelerating a particular seeding algorithm used only in the initialization phase of k-means clustering. Given that k-means iterations typically dominate the overall computational cost, the impact of this improvement on general clustering applications remains unclear. The authors should better position their work within the broader context of clustering and machine learning to justify its significance beyond this niche problem. Essential References Not Discussed: Lastest progresses seem to have been included. Other Strengths And Weaknesses: The introduced adaptive trade-off between computational cost and solution quality is intriguing and has the potential for broader impact beyond k-means++ seeding. If similar techniques can be applied to other clustering or optimization problems, this approach could be valuable in balancing efficiency and accuracy in large-scale machine learning tasks. Other Comments Or Suggestions: There are a few ambiguous notations and possible typos: 1. Page 6, line 279 (RHS): Should $\lVert X \rVert$ be $\lVert \mathcal{X} \rVert $? This notation inconsistency appears in multiple places, including the appendix. The authors should clarify if this is a typo or an intended distinction. 2. Notation system clarity. For instance: - The difference between $\lVert \mathcal{X} \rVert$ and $|\mathcal{X}|$ is unclear without prior explanation. - The meaning of nnz($mathcal{X}$) (number of nonzero entries in $\mathcal{X}$ should be explicitly stated. $\mathcal{X}$ is a set, while nnz($mathcal{X}$) seems to denote the non-zero elements in the feature matrix. Questions For Authors: 1. Can the Lloyd iteration be accelerated with the proposed technique? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer f83S Thank you for your thoughtful and constructive review. We address the concerns below: > (1) Limited impact on ... There are many reasons to speed up the $D^2$-sampling-based seeding itself, and this has already been well addressed in the literature, for example (paragraph 3, introduction) of [1]. The reasons can be summarized as follows : 1.Extending the inherently sequential kmeans++ to the distributed setting is non-trivial while Lloyd's iterations is easily adapted in the MapReduce framework. 2.Many use cases require just the seeding - including components of clustering algorithms in the online / streaming / distributed setting. 3.Many theoretical results and practical algorithms for coreset constructions etc incorporate kmeans++ (as pointed out by reviewer 1rwo as well). Any speedup in kmeans++ seeding percolates to all of these downstream applications. > (2) Dependence on data sparsity. 1.We emphasize that the key advantage of our method is not due to using the sparsity present in the data but due to using rejection sampling for sampling from the $D^2$ distribution instead of computing it wholly. This also allows us to prove improved trade-offs. 2.The pre-processing time is proportional to the time it takes to just read the input, and the main loop for performing the seeding requires just ${O}(mk^2d)$ operations, which is independent of $|X|$. 3.If the data is indeed sparse and is given in a sparse representation, then our method is able to exploit that sparsity for faster pre-processing. 4.In the experiments as well, the speedups reported are solely due to the complexity of the main loop. We shall make this more explicit in the final version. > (3) Accounting for preprocessing time in total complexity Thank you very much for pointing this out. The way the pseudocode is written currently indeed takes $O(|X|d)$ time but can easily be done in $O(nnz(X))$ time as well. Hence, the time complexity mentioned for our method includes the pre-processing time as well. We will be sure to explicitly discuss this in the final version. We now describe how this can be done: Let the dataset be $X = \{x_1,\dots,x_n\}$. The mean of the dataset can be computed in $O(nnz (X))$ time as follows: We initialize $\mu$ to be the all-zero vector and upon seeing an entry $(i,j,v)$ perform the update $\mu(i) \gets \mu(i) + v$ and normalize afterwards. We also compute the norm $\|\mu\|$ in $O(d)$ time. This takes $nnz(X)+d$ updates. Now, to enable sampling from $D_X$, we need to compute the norm for each point of $X$. For each $x \in X$, we can compute $\|x\|^2$ in $nnz(x)$ operations and $\braket{x,\mu}$ in $nnz(x)$ operations as well, since $\sum_{i=1}^d x(i) \mu(i) = \sum_{i : x(i) \neq 0} x(i)\mu(i)$. Since $\|x - \mu\|^2 = \|x\|^2 + \|\mu\|^2 - 2\braket{x,\mu}$, we can compute all the norms in $\sum_{i=1}^nO(nnz(x_i)) = O(nnz(X))$ operations as well. In the sampling step, we compute the distance evaluations using the already computed mean $\mu$, so the complexity of the main loop remains $\tilde{O}(mk^2d)$. So, we need not explicitly center the dataset in $O(|X|d)$ time. We assume that the dataset is centered to make our proofs more simple. > (4) The authors should clarify whether ... The sample and query data structures are not necessary for implementing our method if the data is not sparse or dynamic updates are not required. The reason for including them is to show how the particularly simple form of the probability distribution that we sample from can be exploited to support pre-processing in input sparsity and data updates in $O(\log |X|)$ time, which may be useful in scenarios where there are dynamic updates to the data being clustered, in which case one does not have to re-run the pre-processing step. We leave the choice of implementing them as a design choice since the key speed-up is due to the main loop. > (5) Notation Issues $\|X\|$ denotes the frobenius norm $\|X\| = \sqrt{\sum_{x \in X} \|x\|^2}$, while $|X|$ denotes the number of data points in the dataset $X$. For a vector $x \in \mathbb{R}^d$, $\mathtt{nnz}(x)$ denotes the number of non-zero components of $x$. For the dataset $X$, we use $nnz(X) = \sum_{x \in X} nnz(x)X$. $\|X\|$ should be $\|X\|$ instead. (For the rebuttal we use $X$ instead due to space constraints) > (6) Can the Lloyd iteration ... No, our method specifically speeds up the seeding step of kmeans++, which, as discussed, is a useful case in itself for designing clustering algorithms. Since Lloyd's iterations are very sensitive to the initial centers chosen, the seeding step is important in practice to decrease the number of iterations required for convergence. We hope that we were able to address all concerns. We shall be glad to provide any further clarifications. [1] : Bachem, O., Lucic, M., Hassani, S. H., and Krause, A. Approximate k-means++ in sublinear time. AAAI 2016
Summary: The paper introduces rejection sampling, an alternative sampling method of k-means++ initialization. By making approximations, the method is able to make faster initial k clusters than its similar counterpart that uses MCMC based sampling on tasks where the number of datapoints is substantially higher than the number of clusters. The method seems exciting. But the paper needs some work in regards to presentation (see below). Claims And Evidence: The claims have extensive theoretical analysis and are verified with experiments. Methods And Evaluation Criteria: The datasets are adequate for their method comparison. Theoretical Claims: Yes I have checked the mathematical proofs on why the two sampling methods can replicate the k-means++ initialization with certain approximation guarantee. Experimental Designs Or Analyses: The designs are valid. However, the plot comparison with the original k-means++ should also be done with the MCMC method. Also, while they claim to show two methods, the comparison is only done on the second proposed sampling method. Supplementary Material: I have read through the figures and the texts. I also read the code but I was not able to run the code (via its readme file instructions). Relation To Broader Scientific Literature: The key contributions suggest a fast yet simple way of approximating k-means++. This could be very useful in scenarios where one is testing for the optimum number of clusters and have to run multiple runs of k-means on a single dataset with large number of points (e.g. brain function MRI data). Essential References Not Discussed: None. Other Strengths And Weaknesses: The strength of the paper is a mathematically sound yet simple way of approximating k-means++. K-means++ is often a time bottleneck when running multiple runs of k-means and this could be easily incorporated into open source packages such as scikit-learn or matlab codes to quickly test out its effectiveness. Weakness is, though reasons are explained in the appendix, the amount of comparison results. We can see that it provides faster results than the MCMC based method mentioned and compared against, but it is hard to convince readers to try the new method out in place of what they already have, despite the simplicity of the method. More end-to-end comparisons are needed with RS-k-means -> k-means are needed with reports of NMI, ARI and ACC that many readers are familiar with. Other Comments Or Suggestions: Figure A3 seems to be a duplicate of Figure 1, just with different order of the plots. Figure A3 also has a seemingly unintended black line. Same citation style is used for both in sentence citation and out of sentence citation, confusing the readers. The major concepts used in the Introduction are later described in the Preliminary section, which is an obstacle to reading the paper sequentially. Figure 1 has incredibly small fonts. The code is hard to run in its current state. Assumes Linux operating system, although the code is written in Python which is cross-platform. Table 7 not referenced in the document. Questions For Authors: Are there datasets that will be troublesome for RS-k-means++ but not for k-means++? For example datasets with only a few samples? Please elaborate. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer X1mD, We thank you for your thoughtful and constructive review and suggestions. We address the concerns below: > (1) The plot comparison ... should also be done with the MCMC method. As far as the convergence properties are concerned, the MCMC method converges to kmeans++ as well, and that is why we felt that it was sufficient to compare the solution cost computed by the standard kmeans++ method. However, we now think that it will be more informative to include the plot for MCMC, as suggested by the reviewer, in the final version as well. > (2) while they claim ... the comparison is only done on the second proposed sampling method. The RS-kmeans++ algorithm takes the parameter value $m$, which is the upper bound on the number of rejection sampling iterations allowed. This parameter controls the solution quality v/s computational cost trade-off. The two methods that we propose differ only on the setting of $m$. RS-kmeans++$(\cdot,\cdot, \infty)$ doesn't put a bound on the number of rejection sampling iterations and has the same $O(\log k)$ guarantee as kmeans++, while RS-kmeans++$(\cdot,\cdot,m)$ allows the user to control the trade-off. Experiment 1 shows the effectiveness of the approach when no upper bound is placed on the number of rejection sampling iterations. Due to the strong convergence guarantees which we prove, the behavior of RS-kmeans++$(\cdot, \cdot,m)$ approaches that of RS-kmeans++$(\cdot,\cdot,\infty)$. Experiment 2 is designed to test this, and it is observed that this convergence happens even for small values of $m$. Hence, we believe that the experiments are sufficient to show the overall effectiveness of RS-kmeans++. > (3) It provides faster results than the MCMC ... but it is hard to convince readers to try the new method ... despite the simplicity of the method. We have integrated a basic implementation of our algorithm as a Python library, which can be used along with popular packages like NumPy and Scikit-learn. We hope that this would help the community to easily try out our methods: https://anonymous.4open.science/r/RSkmeanspp-D240/. This integration allows users to try out our method instead of the MCMC method by simply changing a single line of code : `centers = rskmeanspp(X, k, m)` in place of `centers = afkmc2(X, k, m)`. Hence, we do not clearly see why someone would be less likely to try out our faster methods. Moreover, our methods have the added advantages of being able to do the pre-processing in input sparsity, supporting fast data updates and being easy to extend to the parallel setting. > (4) More end-to-end comparisons are needed ... with reports of NMI, ARI and ACC ... We emphasize that RS-kmeans++ is just a faster _implementation_ of kmeans++ seeding, the properties of which are well studied. If the reviewer still feels that including the reports of NMI/ARI/ACC would improve the readability of the paper, then we would be glad to include the reports or citations to relevant literature in the final version. > (5) Are there datasets that will be troublesome for RS-k-means++ but not for k-means++? ... It is possible to construct a pathological dataset that theoretically has a very large value of the $\beta$ parameter. For example, [1] constructs such an example where the inter-cluster distance is $\delta$ and intra-cluster distance is $\Delta$ where $\Delta \gg \delta$. Due to a large $\beta$ value, the theoretical runtime is slower than kmeans++ if $\beta \simeq \Delta/\delta \in \Omega(n/k)$. RS-kmeans++ can be thought of as a _''smart"_ algorithm, which spends less time on datasets that are not well-clusterable. However, we see that the $\beta$ values for real-life datasets are quite reasonable. Indeed, even for datasets with $n/k \sim 10^5$, we observe $\beta \sim 10$. Our method targets datasets that are very large, i.e, $n = |X| \gg k$, since then the runtime of the classic implementation of kmeans++ i.e, $O(nkd)$ becomes prohibitively large, while our main loop has only logarithmic dependence on $n$ i.e., $\tilde{O}(mk^2d)$ > (6) ... Assumes Linux operating system, although the code is written in Python, which is cross-platform ... The anonymized Python library pointed to in an earlier comment should take care of this. > (7) Figure A3 seems to be a duplicate of Figure 1 ... We included the plots so that the experimental section in the appendix is complete by itself. > (8) Presentation Issues We thank the reviewer for their suggestions and shall make sure that these are incorporated in the final version.
Summary: The paper gives a faster version for the $k$-means++ algorithm while maintaining the approximation guarantee offered by the original $k$-means++. They try to approximate the $D^2$ sampling used in $k$-means++. The authors first preprocess the data to center it and make a data structure that allows sampling from the centered data based on its norm. Now they sample data points from a distribution which is a weighted combination of $D^2$ and uniform distribution. The authors prove that their algorithm achieves same guarantees in less time. They empirically demonstrate the effectiveness of their algorithm on real datasets. Claims And Evidence: The claims in the paper appear valid to me. Paper is clearly written for the most parts. Methods And Evaluation Criteria: yes Theoretical Claims: I did not check the proofs in detail but at high level they look alright. Experimental Designs Or Analyses: Seems good. Supplementary Material: I had a cursory look at the supplementary section for the proofs. At a high level, the proofs appear correct. Relation To Broader Scientific Literature: $k$-means++ is a very popular algorithm in Machine learning. It is an essential ingredient of many other algorithms like coreset construction algorithms. This paper tries to speed up the algorithm while keeping its approximation guarantee. It would be of interest to the community. Essential References Not Discussed: There is a paper "Scalable $k$-means clustering via Light Weight Coresets" by Bachem et al. which also uses a distribution very similar to the one mentioned in this paper. It should be compared and contrasted with. The familiarity with that paper makes me question the novelty of this paper a bit. Other Strengths And Weaknesses: See the responses to other sections Other Comments Or Suggestions: See the responses to other sections Questions For Authors: Please address the comparison with the paper mentioned in Essential References Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 1rwo, > (1) Please address the comparison with the paper ... We thank the reviewer for pointing out [1] which we had missed out. Most importantly, we point out the difference in the distributions used by us and [1]. [1] uses the distribution $q(x) = 1/2|X| + \Delta(x,\mu)/2\Delta(X,\mu)$ where $µ$ is the data mean to construct a coreset of size $O(\epsilon^{-2}kd \log k)$ , while we use a _sequence_ of distributions given by $p(x) = (1−\delta)\Delta(x,S)/\Delta(X,S) + \delta/|X|$ (where $S$ is the currently sampled centers) to sample precisely k centers. Although the distributions are different, let us discuss the results of [1] since we missed that in our submission : [1] introduced the notion of _lightweight coresets (LWC)_, where the classical definition of a coreset is relaxed to allow for an additive error. It is shown that if $m = O(\epsilon^{-2}kd\log k)$ points are chosen with probability $q(x) := 1/2|X| + \Delta(x,\mu(X))/2\Delta(X, \mu(X))$ with weight $w(x) = 1/mq(x)$ to form a LWC $C$, then for any set of centers $S$, the following holds: $$ E[\Delta(C,S)] \leq \Delta(X,S) + 4{\epsilon} \Delta(X,\mu(X)) $$ To perform seeding, [1] first constructs a LWC $C$ and, second, applies the standard kmeans++ method to $C$ instead of $X$ (Section 8 of [1]). The total tun time is hence $$O(nnz(X)) + O(mkd) = O(nnz(X)) + O(\epsilon^{-2} k^2d^2 \log k)$$ Combining the above result with the $O(\log k)$ guarantee of kmeans++ gives the following, where $\Delta$ is the cost incurred by the LWC method : $$ E[\Delta] \leq 8 (\ln k +2) \Delta_k(X) + 4{\epsilon} \Delta_1(X) $$ Note that this is a weaker theoretical result as compared to the MCMC method of [2], where the same guarantee is obtained in time $O(nnz(X)) + O(\epsilon^{-1} k^2d)$ (as compared to $O(\epsilon^{-2}k^2d^2 \log k)$ of LWC). Since our approach improves upon [2], we have an improved trade-off as compared with [1] as well. Note that the goal of [1] is the _construction of LWCs_ in the distributed setting (the first part of their algorithm), while our goal is to speed up kmeans++ itself (the second part of their method). To obtain the seeding, the LWC method has to apply _some_ kmeans method (the kmeans++ algorithm in this case ) to the constructed coreset. Let us see some key elements of our work and [1] : 1. As discussed above, the LWC method for kmeans++ attains an approximation guarantee of $O(\log k) \Delta_k + \varepsilon \Delta_1$ in time $O(nnz(X)) + O(\epsilon^{-2}k^2d^2\log k)$, while our method attains $O(\log k) \Delta_k + k^{-\Omega(1 /\varepsilon \beta)} \Delta_1$ in time $O(nnz(X))+O(\varepsilon^{-1}k^2d)$. 2. Time complexity sublinear in $|X|$ after pre-processing is achieved in [1] by constructing an LWC, while we achieve this by using rejection sampling to sample from the $D^2$ distribution. 3. The trade-off in $[1]$ arises due to larger coresets being a better approximation for the complete dataset, while our trade-off arises due to the fact that more number of rejection sampling iterations are a better approximation to the actual $D^2$ distribution. 4. To quantify this advantage, we need to analyze the solution quality of a variant of kmeans++ which we call $\delta$-kmeans++, where the next center is sampled with probability $(1-\delta) \frac{\Delta(x,S)}{\Delta(X,S)}+ \delta \frac{1}{|X|}$, and precisely $k$ centers are sampled. Here, $\delta$ can be understood as the _``failure probability"_ i.e., the probability that all samples are rejected in $m$ iterations, in which case we sample a point uniformly at random. We use a careful potential-based argument to bound the error propagation in the solution quality due to using the weighted distribution. We were not able to find the required properties in any of the previous works, including [1], even though similar sampling distributions were being used. Hence, we believe that our contributions are conceptually and technically different from [1]. An advantage of $[1]$ is that the sampling step in the coreset construction can be implemented via a distributed protocol. We point out that our clustering algorithm can be extended to this setting as well. For simplicity, assume that we have a central node, and the dataset $X$ is distributed between $\ell$ nodes as $X_1,X_2,\dots, X_\ell$. It suffices to show how to sample from $D_X$. Each node $j$ computes $\|X_j\|$ and makes this available to the central node. The central node computes $\|X\|$. To sample a point from $D_X$, the central node chooses a node $j$ with probability $\|X_j\|^2/\|X\|^2$. Node $j$ then samples from $D_{X_j}$ and returns the point to the central node. The central node can then perform the rejection sampling procedure. [1]: Scalable and Distributed Clustering via Lightweight Coresets. Olivier Bachem, Mario Lucic and Andreas Krause. KDD 2018. [2]: Fast and provably good seedings for k-means. Bachem, O., Lucic, M., Hassani, H., and Krause, A. . NIPS 2016
null
null
null
null
null
null
Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models
Accept (poster)
Summary: This paper proposed the GCR framework, where a trie-based index leverages structured knowledge from KGs to address knowledge gaps and hallucinations in LLM reasoning. GCR employs a lightweight KG-specialized LLM for graph-constrained reasoning and a general LLM for inductive reasoning. Experimental results on KGQA benchmarks show that GCR achieves state-of-the-art performance, and demonstrates strong zero-shot generalizability to unseen KGs without additional training. ## update after rebuttal: Thank the authors for their detailed responses. The clarifications provided have addressed my initial concerns regarding the method's efficiency and its handling of special cases. As a result, I see no need to revise my original assessment score and will maintain it as is. Claims And Evidence: Yes. Methods And Evaluation Criteria: The motivation is clear and the proposed method is technically sound. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, extensive experiments have verified the effectiveness of the proposed method. Supplementary Material: Yes, I reviewed the appendix B. Relation To Broader Scientific Literature: The issues of knowledge gaps and hallucinations are significantly important in the current LLM area. The proposed faithful Graph-constrained reasoning may provide a new technical view to the community. Essential References Not Discussed: I do not have suggestions for additional references. Other Strengths And Weaknesses: Please check the above suggestions. Other Comments Or Suggestions: In my opinion, the efficiency of this approach is vital, and I am curious about the practical time costs of the KG-Trie construction stages on real-world KGs. Questions For Authors: How does your method adapt when real-world KGs have missing entities or questionable reasoning paths? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive and constructive review of our paper. Your feedback is invaluable in helping us refine and clarify our work. Below, we address your comments and questions in detail. ### Efficiency of KG-Trie Construction and Practical time costs of the KG-Trie construction We appreciate your emphasis on the efficiency of the KG-Trie construction process. In our current implementation, we optimize KG-Trie construction by leveraging efficient graph traversal techniques and parallelized batch processing. To provide a concrete perspective, we conducted additional experiments measuring the construction time on real-word KG (Freebase) in Appendix B.2, where the average time for KG-Trie construction is only **0.0133s.** We also present further optimizations to handle industrial-scale KGs by incorporating graph retrieval techniques to reduce the size of KG for Trie construction (Appendix B.3 and Figure 6). The breakdown of time consumption in Table 9 also shows the efficiency of the KG-Trie construction when combined with graph retrieval techniques (**0.2838s**). Future work could explore better trie structures or hierarchical indexing to improve efficiency on industrial-scale KGs. ### Adaptability to Missing Entities and Questionable Reasoning Paths Thank you for raising this important concern. Our method is designed to be robust against incomplete KGs in two folds: **Exploring Alternative Paths** When the entities are missing, the GCR can explore alternative paths to fulfill the reasoning. For example, for the question “What is the nationality of Joe Biden?” One of the reasoning paths could be: `Joe Biden -> born_in -> Scranton -> city_of -> USA`. When the entity `Scranton` is missing and leading to the disconnection of the path, GCR could explore other paths like: `Joe Biden -> profession -> President -> work_in -> Washington D.C -> city_of -> USA`. This path can also be used to deduce the correct answer: `USA`. In implementation, we explore the top-10 paths for comprehensive reasoning. **Combining Internal Knowledge of Powerful General LLMs** Due to the noises and incompleteness of KGs, the generated reasoning paths might be unsatisfying. To address this, in the graph inductive reasoning (Section 4.4), we employ a powerful general LLM (e.g., GPT-4o-mini) to incorporate diverse paths for deliberate reasoning. The powerful general LLM exhibits massive internal knowledge and strong inductive reasoning ability. Therefore, it could disregard the questionable paths and deduce final answers accurately. --- Rebuttal Comment 1.1: Comment: Appreciate the authors' thorough response which has addressed several of my concerns raised. Regarding the adaptability to missing entities, the provided examples in the response utilize path lengths of 4-6 hops. However, it is also claimed in Appendix that "When the hops are set to 3 or 4, the performance drops due to the increased complexity of the reasoning paths, which may introduce noise and make the reasoning less reliable." Does the existing architecture support case-specific path exploration with variable length parameters, and if so, through what mechanisms? If not, how would the author address this issue in the future? --- Reply to Comment 1.1.1: Comment: We are glad to hear that your concerns have been addressed, and we appreciate your insightful follow-up questions. First, we would like to clarify that the provided path examples have hop lengths of 2 and 3, respectively. The listed relations—`born_in, city_of, profession, work_in, city_of`—represent individual relations but are not counted separately as hops. The reasoning steps for the two examples are as follows: * **Example 1:** * **Hop 1:** `(Joe Biden, born_in, Scranton)` * **Hop 2:** `(Scranton, city_of, USA)` * **Example 2:** * **Hop 1:** `(Joe Biden, profession, President)` * **Hop 2:** `(President, work_in, Washington D.C.)` * **Hop 3:** `(Washington D.C., city_of, USA)` Regarding case-specific path length exploration, our current implementation does not support dynamic path length adjustments. However, this is an exciting direction for future work. One potential approach would be to **predict the complexity (or hop distance) of a query and dynamically adjust the path length parameters accordingly**. This could involve techniques such as **adaptive path selection, uncertainty estimation, or reinforcement learning-based exploration** to optimize reasoning reliability.
Summary: The paper introduces **Graph-Constrained Reasoning (GCR)**, a novel framework to address hallucination and knowledge gaps in large language models (LLMs) when reasoning over knowledge graphs (KGs). GCR bridges structured KG knowledge with unstructured LLM reasoning by constructing a **KG-Trie**, a trie-based index that encodes valid reasoning paths from the KG. During decoding, the KG-Trie constrains the LLM’s output to generate only KG-grounded paths, ensuring faithfulness. GCR employs a two-stage process: (1) a lightweight KG-specialized LLM generates multiple hypothesis answers and constrained reasoning paths via graph-constrained decoding, and (2) a powerful general LLM performs inductive reasoning over these paths to produce final answers. Experiments on KGQA benchmarks (WebQSP, CWQ) show state-of-the-art performance with **zero hallucination** and strong zero-shot generalizability to unseen KGs (e.g., ConceptNet, medical KGs). Key results include **92.6% Hit@1** on WebQSP and **75.8%** on CWQ, outperforming prior methods like RoG and ToG. The framework reduces latency by avoiding iterative KG traversal and leverages parallel decoding for efficiency. Claims And Evidence: 1. GCR eliminates hallucinations by constraining decoding to KG-grounded paths. Evaluations show **100% faithful reasoning ratio** (Table 5, Figure 5), with all generated paths verifiable in the KG. Ablation studies confirm that removing KG constraints leads to hallucinated paths (e.g., Case 1 in Table 5). 2. GCR achieves SOTA performance on KGQA tasks. Results on WebQSP (92.6% Hit@1) and CWQ (75.8% Hit@1) surpass retrieval-based (RoG: 85.7%) and agent-based (ToG: 68.5%) methods (Table 1). 3. GCR generalizes to unseen KGs without fine-tuning. Zero-shot tests on FreebaseQA (94% accuracy) and CSQA (91%) demonstrate adaptability (Table 6). Performance drops on MedQA (79%) are attributed to domain specificity. Methods And Evaluation Criteria: - **Methods**: - **KG-Trie** efficiently encodes KG paths as token sequences, enabling prefix-based constrained decoding. - **Graph-constrained decoding** uses a fine-tuned LLM (e.g., Llama-3-8B) to generate hypothesis answers and paths under Trie constraints. - **Inductive reasoning** with a general LLM (e.g., GPT-4) aggregates multiple paths for final answers. - **Evaluation Criteria**: - **Datasets**: Standard KGQA benchmarks (WebQSP, CWQ) and zero-shot tests on FreebaseQA, CSQA, MedQA. - **Metrics**: Hit@1, F1, accuracy. Faithfulness is measured via path grounding in KGs. - **Baselines**: Include LLM-only (ChatGPT+CoT), graph-based (ReaRev), and KG-enhanced methods (RoG, ToG). Appropriate for assessing faithfulness, efficiency, and generalization. Theoretical Claims: The paper lacks formal theoretical proofs but provides algorithmic formulations. Experimental Designs Or Analyses: - Confirm the necessity of both KG-specialized and general LLMs (Table 3). - Beam size (K=10) and path hops (L=2) are optimized (Figure 4, Appendix F.1). - GCR achieves **3.6s runtime** vs. 16.14s for ToG (Table 2), leveraging parallel decoding. - **Zero-Shot Generalization**: Tests on medical/commonsense KGs highlight domain limitations. - Soundness: Rigorous and reproducible. Supplementary Material: no Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: - **Strengths**: - Novel integration of KG structure into LLM decoding. - Practical efficiency (parallel decoding, pre-computable KG-Trie). - Strong empirical results and reproducibility. - **Weaknesses**: - Scalability of KG-Trie construction for billion-edge KGs. - Limited analysis of path diversity vs. answer correctness. Other Comments Or Suggestions: N/A Questions For Authors: Could longer reasoning paths (L > 3) improve performance on complex questions, and what are the trade-offs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your positive and insightful review of our paper. Below, we address your comments and concerns point by point. ### R1. Scalability of KG-Trie construction for billion-edge KGs. Scalability is indeed a vital concern, especially for billion-edge KGs. In our current implementation, we optimize KG-Trie construction by leveraging efficient graph traversal techniques and parallelized batch processing. Detailed analysis can be found in Appendix B.2, where the average running time for KG-Trie construction is **0.0133s.** We also present further optimizations to handle industrial-scale KGs by incorporating graph retrieval techniques to reduce the size of KG for Trie construction (Appendix B.3 and Figure 6). The breakdown of time consumption in Table 9 also shows its efficiency of when combined with graph retrieval techniques (**0.2838s**) Future work could explore better trie structures or hierarchical indexing to improve efficiency on industrial-scale KGs. ### R2. Analysis of Path Diversity vs. Answer Correctness We appreciate this suggestion and agree that path diversity plays a crucial role in reasoning quality. GCR first adopts the beam-search to generate top-K paths with KG-specialized LLM (Sec 4.3). Therefore, the path diversity would increase with the size of K. Then, we conduct graph inductive reasoning to incorporate diverse paths for deliberate reasoning with a powerful general LLM (Sec 4.4). As shown in Figure 4, we analyze the impact of different beam sizes K for graph-constrained decoding on the performance of GCR. We observe that the hit and recall of GCR increase with the beam size. Because, with a larger beam size, the LLMs can explore more diverse paths and find the correct answers. However, a larger K would increase the decoding time and diverse paths might introduce noises. Thus, we select the K to 10 to balance between the exploration and exploitation of the reasoning paths. ### R3. Impact of Longer Reasoning Paths (L \> 3\) on Performance We agree that extending path length could potentially enhance performance on complex multi-hop queries. We evaluate the performance of GCR with L up to 4 in Section F.1. The results in Figure 7 show that the performance in WebQSP slightly drops when L \> 2\. This could be because the question in WebQSP only requires up to 2 hops to solve \[1\]. Admittedly, a larger L could improve the performance on complex questions that require longer hop reasoning but the trade-offs would be the size of the KG-Trie. As shown in Figure 7 at Appendix F.1, the size of the KG-Trie is increased from 0.5MB to 7.5MB as the L change from 1 to 4\. This would lead to extra complexity for storage and reasoning. \[1\] LUO, LINHAO, et al. "Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning." ICLR 2024\.
Summary: This paper introduces a new approach called Graph-Constrained Reasoning (GCR), which integrates the structured reasoning capabilities of a KG-specialized LLM with the general reasoning abilities of a general-purpose LLM. GCR uses KG-Trie to encode potential KG reasoning paths, to constrain the KG-specialized LLM decoding process, ensuring each step is grounded by KG. The method outperforms previous methods on KGQA datasets and shows generalization ability across other KGQA benchmarks. Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths The motivation is clear and addresses a hallucination problem in KGQA tasks, while the method is technically sound and shows strong empirical performance. The conversion of the KG into a KG-Trie and introduction of the graph-constrained decoding are practical, as they do introduce not much time and space complexity in the reasoning process. Weaknesses Does the GCR method effectively handle filtering conditions, such as the query "Which state in the US has the largest population?" Or is it limited to relying solely on the knowledge of the powerful general LLM for such cases? The main results show a significantly lower F1 score compared to hit@1. In KGQA tasks, if the correct reasoning path is predicted, all answers are contained in the KG-Trie and they should all be retrieved. Therefore, the F1 score should not be much lower than hit@1. This discrepancy indicates that the KG-specialized LLM may be missing answers in KG due to its inherent knowledge limitations. In Section 5.3, the comparison between GCR and RoG only addresses the hallucination issue of whether the predicted paths exist in the KG, which is necessary but insufficient. A valid path should not only be present in the KG but also exhibit logical coherence. Is it possible to include an evaluation of the quality of paths generated by KG specialized LLM, assessing whether these paths truly represent logical reasoning processes for the given query? Other Comments Or Suggestions: As shown in Case 2 of Table 5, the answer generated by GCR’s reasoning path differs from the final answer, revealing that the KG-specialized LLM relies on its internal knowledge to answer KG-related questions. This creates conflicts between its internal knowledge and the KG’s knowledge, leading to inconsistencies or omissions in answers. These conflicts introduce new errors, which may affect the method’s overall performance. As mentioned in Section 4.2, GCR employs the shortest paths in BFS for training. Is this path always logical correct? There are cases where the training data paths may exhibit semantic or logical inconsistencies as below. To address this, could the authors compare these paths with the relationships specified in the original query to verify their accuracy and relevance? Query:Who is the brother of A's grandfather? Shortest Path(2 hops): A-> gender -> male -> gender -> Answer Reasonable Path(3 hops): A -> father -> C -> father -> D ->brother -> Answer The GCR method uses a small LLM to select path within the KG. However, since LLMs are inherently language models, the decoding process can be viewed as identifying semantically related edges, a task that may not align with the core capabilities of LLMs. Is the use of an LLM truly necessary for this purpose? Is the direction of edges within the KG considered in GCR method? For example, considering the query “what is the continent of the USA” and the KG facts 1) USA -> contain -> New York and 2) US <- contain <- North America, how does GCR handle the query? Will both New York and North America both contained in the output of KG specialized LLM? The tokenization strategy may affect the GCR decoding process, maybe a word level strategy is more reasonable? Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in reviewing our submission. Below, we address your specific concerns: ### R1. Handling Filtering Conditions in Queries GCR can solve this type of query by combining the power of both KG and powerful general LLM. GCR first constrains the reasoning process within the KG to obtain the relevant knowledge, such as the population of all the states in the US. Then, GCR leverages the general-purpose LLM to conduct inductive reasoning on the knowledge to solve queries that require numerical comparison. ### R2. Discrepancy Between F1 Score and Hit@1 In experiments, we select top-K paths generated by KG-specialized LLM for answer generation, which might lead to some missing answers, causing a discrepancy between F1 and Hit@1. The results in Fig. 4 show that the recall of the answers increases as the K and reduces the discrepancy. ### R3. Logical Coherence in KG Reasoning Paths Thanks for bringing up this interesting point. Due to the lack of ground truth and a great number of paths, we utilize the LLMs to evaluate the logical coherence and semantic meanings of the generated paths. The prompt and evaluation result are shown below: ``` As an advanced reasoning evaluator, your task is to analyze whether the following reasoning path presents a **logically coherent connection from the question (subject entity) to the answer (target entity)**. You will assess whether each step in the path is valid and necessary, and whether the overall reasoning supports the final answer in a grounded and justified manner. ### Instructions: 1. Focus on whether the reasoning path makes logical sense from the question to the answer. 2. Check whether each relation contributes meaningfully and validly to reaching the final answer. 3. Penalize paths that make unjustified jumps, overly general connections, or weak associations. ### Rating Scale: 5 - Excellent: Every step is logically valid and contributes clearly toward the answer. 4 - Good: Mostly coherent with minor assumptions or weak steps. 3 - Moderate: Some steps are unclear, general, or weak, but the general direction is acceptable. 2 - Poor: Contains major logical leaps or unclear connections. 1 - Very Poor: Illogical or invalid path from question to answer. ### Output: - Score: [1 to 5] - Explanation: [Brief explanation of the logical quality of the path from question to answer] ### Question: {question} ### Answer: {answer} ### Path: {path} ``` | Method | Score | | :---- | :---- | | GCR | 3.9 | The results show that GCR achieves an average 3.9 in evaluation score, which demonstrates the logical coherence of the generated paths. Moreover, the LLM-based evaluation can be further used for selecting meaningful paths for training. Some meaningful cases can be found in Table 5\. ### R4. Conflicts Between KG-Specialized LLM and KG Knowledge We want to clarify that this difference only happens when **no constraints are applied** (GCR w/o constraint in Table 5), where LLMs purely rely on their internal knowledge for reasoning, which leads to hallucinations. This can be addressed by applying the KG-Trie constraints during reasoning (GCR) to ensure faithful and transparent LLM reasoning, which aligns with the motivation of our method. ### R5. Soundness of BFS Shortest Paths in Training We acknowledge that the shortest paths found via BFS may not always be the most semantically meaningful reasoning paths. However, due to the lack of golden reasoning paths, it would be a great choice to unsupervisedly obtain the paths for training. We will enhance path selection by incorporating LLMs for judging the semantics (as discussed in R3) to ensure that paths adhere to the logical coherence in the KG’s relationships. ### R6. Necessity of LLM for Path Selection As discussed Sec. 4.1, LLMs conduct reasoning by decoding reasoning step-by-step, which purely relies on their internal knowledge and leads to hallucinations. GCR adapts this reasoning behavior to KGs and decodes reasoning paths that are valid on KGs. This well aligns with the reasoning paradigm of LLMs and is faster than other methods adopting LLMs for path selection (e.g., ToG in Table 2). ### R7. Edge Direction We consider the direction of the relations during KG-Trie construction. Therefore, only `US -> is_contained_by -> North America` will be generated. The inverse relations can be added during KG construction. ### R8. Word-level Tokenization We adopt the token-level tokenizer of the original LLMs to avoid retraining it on new KGs. The entity and relations in any KG can be tokenized into tokens for KG-Trie construction. This ensures the transferability of GCR to unseen KGs (Table 6\).
null
null
null
null
null
null
null
null
Near-optimal Regret Using Policy Optimization in Online MDPs with Aggregate Bandit Feedback
Accept (poster)
Summary: This paper considers the problem of learning Adversarial (Tabular) MDPs with aggregated bandit feedback, which means only the total loss (instead of per-round losses) incurred in an episode is revealed. Using PO w.r.t. newly-proposed U-functions on each state, * with known transitions, an $\tilde{\mathcal O}(H^2 \sqrt{SAK})$-regret algorithm is proposed, matching the lower bound up to logarithmic factors; * without transition information, an $\tilde{\mathcal O}(H^3 S \sqrt{AK})$ upper bound is derived; and * a lower bound of $\Omega(H^2 \sqrt{SAK})$ is given via the reduction to multi-task bandit problems. Claims And Evidence: Claims are supported by rigorous proofs. Methods And Evaluation Criteria: The notion of regret is aligned with previous works in the literature, and the comparison with them is fair. Theoretical Claims: I read the statements of all the lemmas and checked several main steps in the proof. Experimental Designs Or Analyses: N/A Supplementary Material: I read the statements of all the lemmas and checked several main steps in the proof. Relation To Broader Scientific Literature: I am sorry to say this, but as far as I can see, the main contributions seem to be really restricted to the specific setup of aggregated bandit feedback (detailed below). Essential References Not Discussed: The literature review is pretty complete, except for a tiny bit regarding the removal of dilated bonuses (see below). Other Strengths And Weaknesses: The results are good to have given the recent interest on MDPs with aggreggated bandit feedback in the literature. The observation of the U-function and the corresponding version of performance difference lemma is cool, which avoids the reduction to linear bandits by Cohen et al. (2021b). However, besides these, it looks like the remaining components like implicit exploration, confidence sets on transitions and occupancy measures, and PO analysis for AMDPs, are more-or-less standard. Therefore I feel it's really a borderline result and can be much more interesting if the authors can utilize more properties of the U-functions (unfortunately, as the authors discuss in the Conclusion part, the structure of U-functions is fragile when the state/action space become infinite). Other Comments Or Suggestions: The removal of dilated bonuses may sound hand-waving by only saying "$B_h^k (s,a)$ is computed using standard Bellman equations, which is simpler and more intuitive than the dilated version of Luo et al. (2021)." It'd be good to refer the readers to (Lancewicki et al., ICML'23) and roughly explain it's because this specific $b_h^k(s,a)=\mathcal O(1)$, which means the extra term $\eta\sum_k B_k^2$ in `Reg` can be directly controlled as $\mathcal O(\eta K)=\mathcal O(\sqrt K)$ instead of the original approach $\eta \sum_k B_k^2 = \mathcal O(\sum_k B_k)$. * Tal Lancewicki, Aviv Rosenberg, Dmitry Sotnikov. Delay-Adapted Policy Optimization and Improved Regret for Adversarial MDP with Delayed Bandit Feedback. ICML 2023. Questions For Authors: Aside from the innovation of U-function and corresponding performance difference lemma, are there other technical innovations that might be independent interest? As I mentioned, I feel the U-functions are nice and elegant, but are really restricted to the specific setup of aggregated bandit feedback (and do not generalize to function approximations since I am not sure whether it makes sense to assume the U-functions are linear in some $\phi(s,a)$'s, which is an analog of the linear-Q assumption in the literature). If there're any technical contribution that I missed, I'd be happy to re-evaluate this paper. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your constructive review. Below is our response to your comments and questions. > *Aside from the innovation of U-function and corresponding performance difference lemma, are there other technical innovations that might be independent interest?* Major contributions of our work: The key novelty lies in the introduction of the U-function, which is easy to estimate under aggregate bandit feedback, and the regret decomposition with respect to it. This contrasts with previous approaches that directly tried to estimate the loss function, inducing several challenges under aggregate bandit feedback such as controlling the estimation bias—this results in a relatively complex algorithm and technical analysis. We believe that the U-function is an interesting concept that elegantly circumvents previous technical challenges and results in an intuitive and simple technical analysis. We believe that this new concept could be insightful for future research on RL with aggregate bandit feedback. > *The removal of dilated bonuses may sound hand-waving... It'd be good to refer the readers to (Lancewicki et al., ICML'23)...* Thank you for your suggestion - we will add a reference to Lancewicki et al. (2023) and discuss the technical differences from Luo et al. (2021). --- Rebuttal Comment 1.1: Comment: Thank you for your response. I agree that the definition of U-function itself is interesting and overcomes disadvantages of previous approaches, making the aggregated bandit feedback as easy (or as hard) as the standard bandit feedback by enabling various standard tabular AMDP techniques to be applicable. Nevertheless, I am still a bit disappointed about the limitation of U-functions to tabular cases and the lack of new techniques that can be of broader interest. Therefore, I still maintain a borderline assessment.
Summary: This paper studies regret minimization in tabular MDPs with adversarial losses, fixed transition kernel, and aggregate/trajectory/full- bandit feedback, meaning that at the end of each episode, the algorithm only receives the total loss among all the $H$ visited state-action pairs (i.e., the entire trajectory), rather than the loss of each pair (aka semi-bandit feedback). This paper provides algorithms to achieve $O(H^2\\sqrt{SAK})$ known-transition upper bound, and $O(H^3S\\sqrt{AK})$ unknown-transition upper bound (ignoring log factors). This paper also shows a matching lower bound for the known-transitioin case, hence characterizing the minimax regret. The unknown-transition upper bound is the same as the best known one in the semi-bandit setting. Techinally, this is due to the Policy Optimization (PO) framework and a novel performance-difference-like regret decomposition w.r.t. the proposed "U function". Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: I check the proof sketchs in the main body. Look correct to me. Experimental Designs Or Analyses: n/a Supplementary Material: no Relation To Broader Scientific Literature: relevant audience include researchers study online learning, reinforcement learning, policy optimization Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths: The new regret decompostion is novel. This is also naturally used in the algorithm design (e.g., now we need to build loss estimate for the U function which is simple). I am impressed by this idea. Other Comments Or Suggestions: n/a Questions For Authors: The questions below are mainly for my curiosity and better understanding this paper: 1. Seems that the boundes are specific to the PO framework (and the regret decomoposition). Does this mean that it's unclear whether occupancy measure + FTRL/OMD can achieve the same bound (as they do not come with such regret decomposition)? 2. As long as we have this new regret decomposition, is the regret analysis pretty much the same as in Luo et al., 2021? If not, what are the key differences? (I don't view this as weakness, I just want to check my understanding). 3. With trajectory feedback, we do not even need the complicated dialated bonus as in Luo et al., 2021. Is it still because now the loss estimate is w.r.t U function (which is already doing some sort of global exploration/optimization)? 4. In the lower bound, why can not we utilize the unknown transition to prove a harder lower bound? What's the challenge? 5. Do the authors believe semi-bandit is statiscally as easy as full-bandit, or the upper bound in semi-bandit can be improved? Thanks in advance for sharing any thoughts. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the positive review and the great questions. We would be happy to discuss any of the points below in the final version that the reviewer believes will improve the paper. **Q1:** Indeed, our bounds are specific to the PO framework. The only 'occupancy measure + FTRL/OMD' algorithm for aggregate bandit feedback (ABF) is by Cohen et al. (2021b), where it is still unclear how to improve them using the occupancy measure approach. Given the regret structure in terms of occupancy measures, it is natural to attempt estimating the loss function itself; however, handling estimation bias under unknown dynamics is quite challenging since the learner does not precisely know the played occupancy measure. Under known dynamics, it might be possible to achieve the optimal bound using 'occupancy measure + FTRL/OMD'. While the optimal bound for linear bandits over general convex sets is $d\sqrt{T}$, in some special cases, such as the simplex, it is possible to achieve tighter bounds. Therefore, there may be room to improve the bounds for the set of occupancy measures. This is an important and interesting open challenge for future research. **Q2:** Yes, the key novelty lies in the introduction of the U-function, which is easy to estimate under aggregate bandit feedback, and the regret decomposition with respect to it. **Q3:** No, in fact, one does not need the dilated component of Luo et al., 2021 (i.e., the $(1+\frac{1}{H})$ factor in the backup operator), even in the semi-bandit case. For prior evidence, see [1]. [1] Lancewicki, T., Rosenberg, A., & Sotnikov, D. "Delay-adapted policy optimization and improved regret for adversarial MDP with delayed bandit feedback." ICML 2023 **Q4:** That is a great question. The challenge in improving the lower bound under unknown dynamics is that ABF is not less informative about the dynamics, since the agent still observes the transitions. Thus, unlike in the semi-bandit case, under ABF it remains unclear whether unknown dynamics make the problem statistically harder. **Q5:** In the known dynamics case, full-bandit feedback is in fact statistically harder than semi-bandit feedback—the optimal bound in the semi-bandit case is $H \sqrt{SAK}$, and our lower bound for the full-bandit case is $H^2 \sqrt{SAK}$. As for the known dynamics case, it remains highly unclear, partly because the optimal bound in the semi-bandit case is also an open question. Under semi-bandit and unknown transitions, the best known upper bound is $H^2 S \sqrt{AK}$ (Jin et al., 2020), while the best known lower bound is $\Omega(H^{3/2} \sqrt{SAK})$ (Jin et al., 2018). If the latter is optimal under semi-bandit, then due to our lower bound, ABF is statistically harder. We also conjecture that the $H^2$ dependency in our lower bound is optimal and that the extra $H$ in our upper bound is an artifact of the PO method (as PO currently has this artifact in the semi-bandit case). If this conjecture is true, and if $H^2 S \sqrt{AK}$ is optimal under semi-bandit, then full-bandit would statistically be as easy as semi-bandit. --- Rebuttal Comment 1.1: Comment: I've read the response from the authors and also the communication between the authors and other reviewers. While currently the results are limited to tabular MDP and it's non-trivial to extend them to (linear) function approx. scenario, I still appreciate the progress made on the understanding of statistical limits in learning tabular MDPs, so I'd like to continue to support this work and recommend acceptance.
Summary: This paper studies online episodic MDPs with adversarial costs and aggregate bandit feedback. Under aggregate bandit feedback, the agent only observes the entire episode loss, making it less informative than the full information setting (the agent observes the full cost function), and the bandit/semi-bandit feedback setting (the agent observes the loss over the agent's trajectory). The paper explores both known and unknown transition function scenarios and introduces a policy optimization algorithm for each case. Using a new tool called the U-function, the proposed algorithms improve upon existing regret bounds in the literature for both settings. Claims And Evidence: All theoretical claims are followed by proofs in the appendix or in the main paper. Methods And Evaluation Criteria: As a theoretical paper, the proposed algorithms are well-suited to the problem. Since a key advantage of policy optimization algorithms is their closed-form solutions, implementation should be relatively straightforward. Therefore, despite the paper’s theoretical focus, it would be interesting to include some experimental demonstrations of the U-functions in practice. Theoretical Claims: Proofs for theoretical claims seem correct. Experimental Designs Or Analyses: not applicable. Supplementary Material: Yes, I reviewed the main ideas of the proofs in the Appendix. Relation To Broader Scientific Literature: The paper improves the current state-of-the-art bound for online MDPs with adversarial losses and aggregate bandit feedback when the dynamics are known and in this case they are the first to establish the near-optimal regret bound, and in the unknown dynamics setting they improve the terms related to the horizon, the state space size, and the action space size when compared to the previous approaches from the literature. The main idea of the paper, the U-functions, seem new. The U-function at a given state-action pair computes the expect cost on the entire trajectory given that the agent visit this state-action pair. On the other hand, once the U-function is introduced, the rest of the paper (the Policy Optimization algorithm, the dilated bonus equation, and the regret analysis) follow from the work of Luo et al. (2021). The difference being that instead of considering an estimation of the Q-function, the paper considers an estimation of the U-function. Essential References Not Discussed: All essential references seems to be discussed. Other Strengths And Weaknesses: This is an interesting paper for the following reasons: - The paper is well-written and organized. - The concept of the U-functions is new and is a simple and elegant way of attacking the bandit aggregate feedback problem. - While there are no additional technical novelties beyond the introduction of the U-function, demonstrating that the policy optimization algorithms from Luo et al. (2021), originally designed for bandit feedback, can be adapted, thanks to the U-functions, to the more challenging aggregate bandit feedback setting while maintaining comparable regret bounds is a significant contribution. It improves on previous results, and could open the way for new contributions on works of RL with trajectory feedback. Other Comments Or Suggestions: No other comments. Questions For Authors: I checked the proof for the case with known dynamics and it sounds correct, but I wonder if a similar result could also be obtained without the exploration bonuses (as here the dynamics are known), using similar results as the Appendix B.3 from Jin et al 2020. *Jin et al., Learning adversarial markov decision processes with bandit feedback and unknwon transition, 2020.* Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive and constructive review. Below is our response to your comments and questions. > *despite the paper’s theoretical focus, it would be interesting to include some experimental demonstrations of the U-functions in practice.* Our work is theoretically focused. Previous theoretical work has taken a more direct approach and faced challenges such as directly estimating the loss function and controlling the estimation bias in the regret bound. In our work, we introduce the concept of U-functions, which allows us to circumvent these challenges and handle aggregate bandit feedback much more elegantly. We hope this new concept will be insightful for future research, particularly for empirical studies and more practical applications. However, we believe that tabular MDPs would not be a good framework for demonstrating the practical applications of the U-function. Instead, an extensive empirical study that incorporates the U-function into a practical deep RL method, tested under a well-chosen set of benchmarks with large state spaces, would be more appropriate. We believe that this is an important area for future research and will include it in the future work section. > *…for the case with known dynamics… I wonder if a similar result could also be obtained without the exploration bonuses (as here the dynamics are known), using similar results as the Appendix B.3 from Jin et al 2020.* Jin et al. 2020 employs an occupancy-measure-based algorithm where, indeed, one does not need to incorporate bonuses in the known dynamics case. On the other hand, in the Policy Optimization method, it is not clear how to achieve $\sqrt{K}$ bounds without an exploration bonus. As mentioned in the paragraph starting at line 240, the purpose of the bonus in the known dynamics case is to address the distribution mismatch between $\mu^\star(s)$ and $\mu^k(s)$ that appears in the regret bound (e.g., line 294, right column). This distribution mismatch is rather orthogonal from the additional challenges in the unknown dynamics case, and stems mainly from the fact that the value difference lemma breaks the regret over states with weights $\mu^\star(s)$, and that we run the OMD locally in each state (unlike the method in Jin et al. 2020, which is the framework that Cohen et al. (2021b) builds upon).
Summary: The paper studies finite-horizon MDPs with adversarial losses under the aggregate bandit feedback model. In the known-dynamics case, the paper achieves the first optimal regret bound, while in the case of unknown dynamics it significantly improves the previous best known result. Claims And Evidence: Yes, the claims made in the paper are supported by clear and convincing evidence and proofs. Methods And Evaluation Criteria: N/A. The paper is on the theory of reinforcement learning. Theoretical Claims: Yes. I checked many proofs of the paper, especially Appendix B. I have some minor points/questions to the authors: - I think that the bonus b(s) should not have $\pi$ in the denominator (see, for example, the analysis in line 684). - Line 695: The variance term must contain $\mathbb{E}[Y_k]$, since the martingale difference is $Y_k - \mathbb{E}[Y_k]$. However, I believe that such minor errors across the analysis do not change the complexity results of the paper. Experimental Designs Or Analyses: N.A. Supplementary Material: Yes Relation To Broader Scientific Literature: I believe this paper is of great importance, as it solves an important problem of RL theory. More specifically, the paper deals with finite-horizon, tabular, adversarial MDPs under the relatively unexplored, but very interesting, full bandit setting. - The paper proposes the first optimal algorithm under known dynamics, matching the lower bound (proved by the authors) and the regret of [1] (semi-bandit). - The paper significantly improves the previous best regret upper bound of [2] under unknown dynamics. - From a technical standpoint, the authors introduce a novel RL objective, namely the U-function, which facilitates the proposed algorithms and the analysis for the full bandit setting. Based on the U-function, using well-established techniques, the paper achieves good and interesting results in this problem. [1] Zimin, A. and Neu, G. Online learning in episodic marko- vian decision processes by relative entropy policy search. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pp. 1583–1591, 2013. [2] Cohen, A., Kaplan, H., Koren, T., and Mansour, Y. Online markov decision processes with aggregate bandit feedback. In Conference on Learning Theory, pp. 1301–1329. PMLR, 2021b. Essential References Not Discussed: The paper covers adequately the related work. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: - (Line 326) Can the authors provide indicative works where the Bernstein sets are used ? - Can the authors provide the time complexity of the proposed algorithms ? Questions For Authors: - In the unknown dynamics setting, how does Algorithm 2 compute $\mu$? What is the time complexity per episode for these steps? - Have the authors tried a model-free approach? (that is, without estimating the transition probabilities) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive and constructive review. Below is our response to your comments and questions. > *I think that the bonus b(s) should not have π in the denominator (see, for example, the analysis in line 684)* Note that in line 684 the denominator has $\mu_h^k (s,a)$ which is by definition $\mu_h^k (s) \pi_h^k (a \mid s)$ (as in the bonus). > *Line 695: The variance term must contain E[Yk], since the martingale difference is Yk−E[Yk]. However, I believe that such minor errors across the analysis do not change the complexity results of the paper.* Thank you for spotting that. This is indeed not a proper use of Lemma E.2 but has a simple fix: for the martingale difference, $|Y_k - \mathbb{E}[Y_k]| \leq \max(Y_k, \mathbb{E}[Y_k]) \leq \frac{H^2}{\gamma}$, and the variance is bounded by the second moment: $\mathbb{E}[(|Y_k - \mathbb{E}[Y_k]|)^2] \leq \mathbb{E}[Y_k^2]$. We will correct this in the final version in line 695 and also in line 990 (which has the same minor error). > *(Line 326) Can the authors provide indicative works where the Bernstein sets are used?* Yes, in the context of Policy Optimization see for example Shani et al. (2020); Luo et al. (2021). But it is also widely used in other types of algorithms such as in occupancy-measure-based algorithms (Jin et al., 2020) and FTPL (Dai et al., 2020). We will reference those in line 326. Dai, Yan, Haipeng Luo, and Liyu Chen. "Follow-the-perturbed-leader for adversarial markov decision processes with bandit feedback." > *Can the authors provide the time complexity of the proposed algorithms?* For the known dynamics case, the state occupancy measure can be computed using the following recursive formula: $\mu_{h+1}^{\pi}(s') = \sum_{s,a} \mu_{h}^{\pi}(s) \pi_{h}(a \mid s) P(s' \mid s,a)$, so that the full occupancy measure can be computed in $O(H S^2 A)$. The bonus $B$ is calculated via backward dynamic programming also in $O(H S^2 A)$, and the policy computation is done in $O(H S A)$. Thus, the total complexity per iteration is $O(H S^2 A)$. For the unknown dynamics case, the upper/lower confidence occupancy measure computation can be done using Algorithm 3 in (Jin et al., 2020), with a complexity of $O(H S^2 A)$ per state. The same can be done for the computation of the bonus $\hat{B}$. We will include these details in the final version. > *In the unknown dynamics setting, how does Algorithm 2 compute μ? What is the time complexity per episode for these steps?* See above. > *Have the authors tried a model-free approach?* To the best of our knowledge, all existing regret minimization algorithms for MDPs with non-stochastic losses are model-based, even in the semi-bandit case. Achieving sub-linear regret under non-stochastic losses using a model-free algorithm is a very interesting future direction, already in the simpler context of semi-bandit feedback.
null
null
null
null
null
null
Graph Generative Pre-trained Transformer
Accept (poster)
Summary: This paper introduces the graph generative pre-trained transformer (G2PT) for graph generation using auto-regressive transformers. The method introduces a sequence-based graph representation approach, which fits well to transformer architectures originally developed for NLP. The paper explores fine-tuning methods for downstream applications, e.g., goal-oriented graph generation and property prediction. Experiments demonstrate that G2PT achieves SOTA performance across several benchmarks, including generic graphs and molecular graphs. Claims And Evidence: Claims made in the submission look good to me. Methods And Evaluation Criteria: Yes, they make sense to me. Theoretical Claims: The major theoretical claim is that maximizing the sequence likelihood serves as maximizing a lower bound on the true graph likelihood, which is pretty clear to me. Experimental Designs Or Analyses: 1. I would suggest to add more detailed introduction/explanation for Table 2. From the title and text, it's not straightforward to know what tasks are performed there. 2. There are a lot of zeros in Table 2. Is it possible that the model overfits? Supplementary Material: Supplementary Material looks good to me. Relation To Broader Scientific Literature: The graph generation task targeted by the paper is a big domain, including a few classic works such as GraphRNN, GRAN. Tokenizing graph is well motivated given the success of LLMs. Essential References Not Discussed: I don't see essential references not discussed by the paper. Other Strengths And Weaknesses: The paper is easy to follow. The method proposed by the paper is straightforward yet effective, and complexity could be reduced by sequential representation compared to using adjacency matrix. The fine-tuning approaches (e.g., rejection sampling, reinforcement learning) are thoroughly explored and clearly demonstrated through experiments on goal-oriented molecule generation. Other Comments Or Suggestions: NA Questions For Authors: Please see my comments above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive suggestion! We address the concerns below. --- **Q1**. _I would suggest adding a more detailed introduction/explanation for Table 2. From the title and text, it's not straightforward to know what tasks are performed there._ **A1**. Thanks for pointing this out! Since the experiment settings are standard and due to the page limit, we move most of the experiment details to the appendix. We will provide add a more detailed introduction for Table 2 in the next version. --- **Q2**. _​​There are a lot of zeros in Table 2. Is it possible that the model overfits?_ **A2**. We report the V.U.N metric in our table. Specifically, this is computed by counting the percentage of generated graphs being valid, unique and novel. The novelty metric indicates whether the samples are different from the training graphs. Therefore, the model performs well not just by memorizing the training graph. Moreover, they are not exact zeros but we round it to 4 decimals (potentially a value of 1e-5 during evaluation). The values are so small may be due to the nature of the graph statistics or simply because they are very easy to be captured by the model. --- We hope that we have addressed your concerns, thanks!
Summary: This paper proposes Graph Generative Pre-trained Transformer (G2PT) as a novel approach to molecular graph generation models. While conventional graph generation models are primarily adjacency matrix-based, this method treats node and edge lists as token sequences and employs an autoregressive Transformer (Transformer Decoder) for efficient learning. To extend G2PT as a general-purpose foundation model, it explores fine-tuning techniques for two downstream tasks: goal-oriented generation and graph property prediction. Computational experiments demonstrate that G2PT achieves state-of-the-art (SOTA) performance in molecular generation (QM9, MOSES, GuacaMol), general graph generation (Planar, Tree, Lobster, SBM), and downstream tasks (MoleculeNet). Claims And Evidence: The paper presents experimental results demonstrating that the proposed token sequence-based graph generation approach is more efficient than existing adjacency matrix-based methods. It also confirms that G2PT achieves performance comparable to or exceeding state-of-the-art (SOTA) models based on generation quality metrics such as MMD, Validity, Novelty, and FCD. Additionally, for goal-oriented generation, fine-tuning G2PT using rejection sampling (RFT) and PPO (reinforcement learning) effectively enhances target graph properties such as QED, SA, and GSK3β. On the other hand, the way of representing graphs as token sequences is a very straightforward and cannot be said to build upon previous research. The extent to which "learning as a token sequence" and "learning through autoregressive next-token prediction" individually contribute remains unclear, and the evidence for each aspect appears somewhat insufficient. At the very least, there should be a discussion on how this method relates to existing token-based approaches—such as those highlighted in well-cited reviews like https://arxiv.org/abs/2302.04181—or, if specializing in molecular graphs, a comparison with simpler methods that feed traditional symbol-sequence representations of molecules like SMILES or SELFIES to Transformers. Methods And Evaluation Criteria: While traditional graph generation models have primarily been adjacency matrix-based, this approach treats node and edge lists as token sequences and learns them efficiently using an autoregressive Transformer (Transformer Decoder). The paper notes that while early research on graph generation started with naive autoregressive methods, many recent high-performance models, such as those based on discrete diffusion, generate adjacency matrices. This makes it technically interesting to examine whether the autoregressive approach remains effective. Additionally, the study evaluates the method using two tasks: goal-oriented graph generation and property prediction. The evaluation also extends beyond molecular datasets to include generic datasets. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental datasets and tasks are well-balanced, incorporating molecular generation benchmarks such as MOSES and GuacaMol, as well as generic datasets. However, the QM9 dataset contains molecular data with 3D coordinates, which may not be the best fit for the proposed method. A more careful consideration of this choice may be necessary. Supplementary Material: No Relation To Broader Scientific Literature: While traditional graph generation models have primarily been adjacency matrix-based, this approach treats node and edge lists as token sequences and learns them efficiently using an autoregressive Transformer (Transformer Decoder). The paper notes that while early research on graph generation started with naive autoregressive methods, many recent high-performance models, such as those based on discrete diffusion, generate adjacency matrices. This makes it technically interesting to examine whether the autoregressive approach remains effective. Essential References Not Discussed: There should be a discussion on how this method relates to existing token-based approaches—such as those highlighted in well-cited reviews like https://arxiv.org/abs/2302.04181—or, if specializing in molecular graphs, a comparison with simpler methods that just feed widely-established symbol-sequence representations of molecules like SMILES or SELFIES to Transformers. Attending to Graph Transformers https://arxiv.org/abs/2302.04181 SELFIES and the future of molecular string representations https://doi.org/10.1016/j.patter.2022.100588 Transformer-based models for chemical SMILES representation: A comprehensive literature review https://doi.org/10.1016/j.heliyon.2024.e39038 While this paper also evaluates generic tasks, its main focus is on molecular graph generation. Given that SMILES (and SELFIES) representations have a long history and are widely used for molecular representation, a natural question arises: why not simply input these representations into a Transformer? In fact, similar proposals have been repeatedly explored in the machine learning field, and discussions from the following paper and its peer reviews could provide useful insights. (**This doesn't mean the paper should cite the following papers; Just for information to make sure the main discussion points along this kind of research.**) SmilesFormer: Language Model for Molecular Design https://openreview.net/forum?id=VBQZkYu22G SELFIES-TED : A Robust Transformer Model for Molecular Representation using SELFIES https://openreview.net/forum?id=uPj9oBH80V Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. We believe the suggestion are very constructive in improving the quality of our draft, below we address the raised concern. --- **Q1**. _On the other hand, the way of representing graphs as token sequences is very straightforward and cannot be said to build upon previous research. The extent to which "learning as a token sequence" and "learning through autoregressive next-token prediction" individually contribute remains unclear, and the evidence for each aspect appears somewhat insufficient._ **A1**. Yes, the way we represent graphs as tokens is both straightforward and novel as it is not built upon previous research. This is because previous token-based approaches have focused on **learning representations for graph/node/edge**, while ours focus on **generating graphs**. Regarding the contributions of each component, obtaining the token sequence is a necessary first step for applying next-token prediction. As a result, it is challenging to separate their individual impacts. Moreover, none of the earlier token-based methods are suitable for next-token prediction learning, which is why we developed this new tokenization approach. We will further elaborate on this in our response to your next question. --- **Q2**. _At the very least, there should be a discussion on how this method relates to existing token-based approaches - such as those highlighted in well-cited reviews like https://arxiv.org/abs/2302.04181._ **A2**. Thank you for sharing the comprehensive survey covering previous graph-based transformer methods. We have thoroughly reviewed these approaches. Below we summarize what distinguish our method from them. - **Task**: G2PT is designed specifically for graph generation, whereas the token-based approaches discussed in the review primarily target graph representation learning. This fundamental difference necessitates distinct tokenization strategies for our task. - **Tokenization**: Correspondingly, G2PT’s tokenization is **invertible** between graph and token sequence. That said, we can obtain the graph by de-tokenizing the graph. While the previous token-based approaches only address how to represent graphs into tokens (node / node+edge / subgraph), translating token sequence back to graph remains unclear. GPT-like models have demonstrated tremendous success across various domains. However, attempts on utilizing them in graph generation remain inadequate. We believe the barrier lies in how to efficiently tokenize and de-tokenize graphs, which our work directly addresses. --- **Q3**. _or, if specializing in molecular graphs, a comparison with simpler methods that feed traditional symbol-sequence representations of molecules like SMILES or SELFIES to Transformers._ **A3**. G2PT is more generic in representing graphs, making it more flexible thus more suitable for various molecule-related applications compared to traditional symbol-sequence. Such applications include **constrained generation**, **molecule in-painting**, **retrosynthesis**, etc. (https://arxiv.org/pdf/2502.09571, https://arxiv.org/abs/2308.16212) In contrast, since symbol-sequences like SMILES/SELFIES are canonical for graphs, modifying any node or edge on the graphs will lead to a transformative change on the sequence's representation. Methods that operate on SMILES/SELFIES need to employ a seq-to-seq approach to generate a new molecule from scratch given the condition one. G2PT's principle of tokenizing a graph is more general and can be further extended beyond the one we define in the paper. By introducing addition/deletion/replace actions, alteration can be easily performed on the original molecule via defining a sequence of action. Such action trajectory well aligns with the process of lead optimization. And we believe such agentic paradigm would be more universal and scalable. --- **Q4**. _The experimental datasets and tasks are well-balanced, incorporating molecular generation benchmarks such as MOSES and GuacaMol, as well as generic datasets. However, the QM9 dataset contains molecular data with 3D coordinates, which may not be the best fit for the proposed method. A more careful consideration of this choice may be necessary._ **A4**. We agree that QM9 contains 3D information. The reason we chose QM9 is to follow prior works (DiGress, DeFoG, Cometh) to provide a more comprehensive comparison. --- We hope that we have addressed your concern. Let us know if you have any following-up questions. --- Rebuttal Comment 1.1: Comment: Thank you for your comment. The additional information has partially addressed my concerns, and I now have a better understanding of the paper's contributions. At the same time, because the proposed approach is quite simple, I also feel that the paper should have at least included these detailed discussions and/or experimental comparisons with existing tokenization methods and traditional linear molecular representations like SMILES and SELFIES. If the proposed tokenization method is specifically designed for generation (rather than molecular representation learning itself), providing evidence to support that would make it both interesting and valuable. I look forward to seeing future improvements. --- Reply to Comment 1.1.1: Comment: Thanks for your response! For sure that we are happy to further address your concern. --- **Q1.** _I also feel that the paper should have at least included these detailed discussions and/or experimental comparisons with existing tokenization methods and traditional linear molecular representations like SMILES and SELFIES_ **A1.** We consider two baselines: **GEEL** and **LigGPT**. Here GEEL proposes another graph tokenization approach and LigGPT is based on SMILES sequences. We compare G2PT against these two baselines on molecular datasets. Note that due to the limited time, we copy the performance of LigGPT from its paper. For GEEL, we report the reproduced results, which we've done earlier per Reviewer TGsg's request. - Paper links: GEEL: https://arxiv.org/pdf/2312.02230 LigGPT: https://chemrxiv.org/engage/chemrxiv/article-details/60c7588e469df48597f456ae - Results on MOSES ||Validity |Unique |Novelty |Filters |FCD |SNN |Scaf | |-|-|-|-|-|-|-|-| |GEEL |92.1 |100 |81.1 |97.5 |1.28 |0.52 |**3.6** | |LigGPT |90.0 |99.9|**94.1** |- |- |- |- | |G2PT |**96.4** |**100** |86.0 |**98.3** |**0.97** |**0.55** |3.3 | - Results on GuacaMol ||Validity|Unique|Novelty|KL Div.|FCD| |-|-|-|-|-|-| |GEEL|88.2|98.2|89.1|93.1|71.5| |LigGPT|**98.6**|99.8|**100**|- |- | |G2PT|94.6|**100**|99.5|**96.0**|**93.4**| G2PT achieves superior result compared to the two baselines. **Note that LigGPT performs temperature tuning (1.6 for MOSES and 0.9 for GuacaMol) to maximize performance while ours use the default temperature (1.0) for sampling.** We hope the comparison has addressed your concerns. We will include those results along with the discussion in previous response in our updated draft. --- **Q2.** _If the proposed tokenization method is specifically designed for generation (rather than molecular representation learning itself), providing evidence to support that would make it both interesting and valuable_ **A2.** Thank you for the insightful question. Indeed we develop this tokenization approach for generation task while previous approaches mostly focus on representation learning. However, whether the G2PT's tokenization could be used for representation learning remains unexplored. While we explore on a downstream graph prediction task in the submission, we believe there maybe a better paradigm (training objective) for G2PT tokenization. One of the direction we are currently exploring is to apply encoder-based transformer for mask modeling. This is implicitly performing node reconstruction and edge prediction task when trying to recover token from mask. We are still working on introducing informative learning tasks in graph-level. And since such work deviates the problem we targeted to address in the submission, we will leave such work in the future. --- We hope our follow-up responses addressed your remaining concerns. Thank you again for taking the time to review our submission and rebuttal.
Summary: Authors introduce a new way to represent graph as sequence of tokens, that contains both node definitions and edge definitions. They use this representation and standard transformer architecture trained on next-token prediction task to generate new graphs. The method is competetive to SOTA diffusion and non-autoregressive graph generative methods, which is not the case for older autoregressive approaches. Authors further expand their methods by showing its utility for downstream molecule property prediction tasks as well as RL finetuning for goal oriented generation. ## update after rebuttal The authors adressed my main concerns raised, and overall I think this is a good paper. Thus I recommend acceptance Claims And Evidence: I find all claims to be well validated experimentally using established benchmarks. Methods And Evaluation Criteria: Authors use a wide range of well established graph generation benchmarks that are good for testing graph generation approaches. Theoretical Claims: The theoretical claims are quite standard and rely on well known methods and thus are sound. Experimental Designs Or Analyses: All experiments follow standard practices and use standard datasets. Supplementary Material: I read the full appendix. Relation To Broader Scientific Literature: The paper revisits autoregressive graph generation and shows that with modern neural netrowks and smarter data representation the autoregressive methods can be competetive to the current SOTA diffusion-based graph generative methods. They do skip one very relevant related work as per below. Essential References Not Discussed: Authors failed to discuss and compare to probably the most comparable work in the literature to theirs: https://arxiv.org/pdf/2312.02230 which also proposes a new way to represent graphs for autoregresive generation. Other Strengths And Weaknesses: No other weaknesses. The method is quite straight forward, the validation is quite extensive and the results look good and are in line with other SOTA approaches, which is good enough for a method resurecting a much older way of doing things (autoregressive generation). Other Comments Or Suggestions: A small note is that in lines 327-328 authors quote DiGress as source for valid, unique and novel samples (V.U.N.) metrics, while they were originally introduced in SPECTRE (https://arxiv.org/abs/2204.01613). Questions For Authors: I'd like to see a comparison to https://arxiv.org/pdf/2312.02230. Can the authors also expand on tie-breaking in Algorithm 1? How is the problem solved when multiple nodes have the same degree? Are ties broken randomly and ordering is different in each epoch for the same sample or are they broken in a fixed maner based on original graph IDs. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive review, below we address the raised question/comment. --- **Q1**. _Discussion with GEEL (https://arxiv.org/pdf/2312.02230) and comparison_ **A1**. We first provide a discussion with GEEL then provide the experimental result of G2PT and GEEL on 6 graph datasets. - Discussion: G2PT and GEEL both transform graphs into edge lists. Typically, an edge list is viewed as a sequence of node pairs, such as $[(s_1, t_1), (s_2, t_2), …, (s_m, t_m)]$. The efficiency of this representation hinges on the construction of the vocabulary. A basic method treats each pair as a distinct “word,” leading to a vocabulary size on the order of $O(N^2)$ for a graph with N nodes, which becomes unscalable and directly depends on the graph size. To address this, GEEL reduces the vocabulary size by drawing inspiration from the concept of graph bandwidth B. This idea is based on the observation that, after an appropriate node permutation, only the entries near the diagonal of the adjacency matrix are nonzero, thereby shrinking the vocabulary from $O(N^2)$ to $O(B^2)$. In contrast to GEEL, G2PT represents each node pair using two tokens rather than a single token. This multi-token approach provides better flexibility and avoids any assumptions about the graph’s structure, reducing the vocabulary size significantly to $O(N)$. It is also important to note that for any graph, a trivial lower bound for the graph bandwidth is given by $bw(G) ≥ \Delta/2$, where $\Delta# is the maximum degree of the graph. Thus, while GEEL’s vocabulary size may vary depending on the graph’s pattern, G2PT’s approach remains pattern-agnostic. - Result Comparison: We compare G2PT with GEEL on generic graphs (Planar, Tree, Lobster, SBM) and molecular graphs (MOSES, GuacaMol). _Planar_ | | Deg | Clus | Orbit | Spec | Wavelet | V.U.N. | |------|--------|--------|--------|--------|---------|--------| | GEEL | **1e-3** | 1e-2 | 1e-3 | - | - | <27.5 | | G2PT | 1.8e-3 | **4.7e-3** | **0.00** | 8.1e-3 | 5.1e-3 | **100** | _Tree_ | | Deg | Clus | Orbit | Spec | Wavelet | V.U.N. | |------|--------|--------|--------|--------|---------|--------| | GEEL | **1.5e-3** | **0.00** | 2e-4 | 1.5e-2 | **4.6e-3** | 90 | | G2PT | 4.2e-3 | **0.00** | **1e-4** | **7.3e-3** | 5.7e-3 | **99** | _Lobster_ | | Deg | Clus | Orbit | Spec | Wavelet | V.U.N. | |------|--------|--------|--------|--------|---------|--------| | GEEL | 2e-3 | **0.00** | 1e-3 | - | - | <72.7 | | G2PT | **1e-3** | **0.00** | **0.00** | 4e-3 | 1e-2 | **100** | _SBM_ | | Deg | Clus | Orbit | Spec | Wavelet | V.U.N. | |------|--------|--------|--------|--------|---------|--------| | GEEL | 2.5e-2 | **3e-3** |2.6e-2 | - | - | <42.5 | | G2PT | **4.2e-3** | 5.3e-3 | **3e-4** | 6.1e-3 | 6.9e-3 | **100** | _MOSES_ | | Validity | Unique | Novelty | Filters | FCD | SNN | Scaf | |------|--------|--------|--------|--------|---------|--------|--------| | GEEL | 92.1 | **100** | 81.1 | 97.5 | 1.28 | 0.52 | **3.6** | | G2PT | **96.4** | **100** | **86.0** | **98.3** | **0.97** | **0.55** | 3.3 | _GuacaMol_ | | Validity | Unique | Novelty | KL Div. | FCD | |------|--------|--------|--------|--------|---------| | GEEL | 88.2 | 98.2 | 89.1 | 93.1 | 71.5 | | G2PT | **94.6** | **100** | **99.5** | **96.0** | **93.4** | We will include the discussion as well as the experiment result in our next version. --- **Q2**. _V.U.N reference correction_ **A2**. Thanks for correcting this! We will update the reference in the next version. --- **Q3**. _Can the authors also expand on tie-breaking in Algorithm 1? How is the problem solved when multiple nodes have the same degree? Are ties broken randomly and ordering is different in each epoch for the same sample or are they broken in a fixed manner based on original graph IDs._ **A3**. Excellent question! Yes we break the tie randomly and will obtain a different order every epoch. We observe that using more than 10 orders for every graph leads to better generalization and performance (similar to data augmentation). We kindly refer to the reviewer in Figure 3 for how using more orders affects the overall validity of molecular graphs. --- Let us know whether we address your concerns, thanks!
null
null
null
null
null
null
null
null
Revisiting Unbiased Implicit Variational Inference
Accept (poster)
Summary: In this work they propose importance sampling estimation of the score function needed for minimising the KL divergence between q_z and p_z in SIVI. To do this they use a CNF proposal. They compare their methods (one which uses the importance sampling estimator and one which doesn't) against a kernel stein discrepancy based method (KSIVI) and a particle approximation to a Euclidean-Wasserstein gradient flow (PVI). They show that their methods perform comparably to PVI and KSIVI on a Bayesian Logistic Regression task, and that their importance sampling based method performs favourably (both in terms of approximation quality and clock-time) on a Conditioned Diffusion Process task. ## update after rebuttal I maintain my recommendation. Claims And Evidence: I believe the authors show that their proposed method works and is comparable to other existing methods, while saving on computation thanks to the replacement of the costly MCMC step with importance sampling. Methods And Evaluation Criteria: Yes, the datasets and models evaluated on seem adequate. Theoretical Claims: I looked over the proof in the main text which seemed correct to me. I also looked at some of the proofs of consistency in the appendix and could see no issues. Experimental Designs Or Analyses: As previously mentioned the experiments seemed sensible. Supplementary Material: I looked at the proofs in the appendix. Relation To Broader Scientific Literature: The work builds on Titsias & Ruiz (2019) which introduces Unbiased Implicit Variational Inference, and in particular includes the path gradient estimator which is needed to make their importance weighted estimator of the score function work. This work also relates to Yin & Zhou (2018) in that they both use a semi-implicit distribution as a proposal. Essential References Not Discussed: Not that I'm aware of. Other Strengths And Weaknesses: I thought this was a well written paper, using the background section to both cover the state of the field but also motivate the development of the presented method was a nice touch. Section 3.2 was interesting and I appreciated that some thought was given to efficient implementation. In the results I would have liked to have seen iterations per second for all the methods used in the final example. Other Comments Or Suggestions: "principle manner" line 029 From line 55 to 62 in the second column, I wonder if Andrade (2024) might not be a better reference as they have a theorem aimed at this very question, whereas VI: A review for statisticians seems to suggest that underestimation of the variance is "a consequence of its objective function"? In algorithm 2 it isn't clear what theta is for (I'm guessing the parameters of tau). Questions For Authors: On line 250 in column 2: "This alternating training is possible since sIS,k(z) is a consistent estimator of the score gradient for any τε|z with supp(qε|z) ⊂ supp(τε|z)." Could you explain this further? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your constructive feedback. We have made the following revisions in response to your comments: 1. **Computational Cost Comparison**: Following your suggestion to investigate computational costs, we have now performed comparisons based on the Conditioned Diffusion Process task for all methods. To facilitate a fair comparison, we use the total iterations of KSIVI (outer loop iterations × inner loop iterations) instead of just the outer iterations as done in Sec 5.3. This comparison highlights that AISIVI learns much faster per iteration than KSIVI (as shown in the [table](https://figshare.com/s/f45697639e8908df84dd?file=53348411)), but also emphasizes the extraordinarily fast implementation of KSIVI when comparing run times. Overall, the comparison confirms that only the gold standard KSIVI and our AISIVI reach log marginal likelihood values > 70,000, while all other methods notably lag behind. This result aligns with the visual comparison in [Fig. 4](https://figshare.com/s/f45697639e8908df84dd?file=53348420) and demonstrates that it is possible to achieve such performance without relying on a kernel-based approach. 2. **Explanation for Consistency**: Regarding your question > On line 250 in column 2: "This alternating training is possible since sIS,k(z) is a consistent estimator of the score gradient for any τε|z with supp(qε|z) ⊂ supp(τε|z)." about the consistency of alternating training: This statement means that as long as K (the number of importance samples) is large enough, we can draw from any proposal distribution that satisfies the support assumption. Since the score gradient estimator ($s_{\text{IS}}$) is consistent, it converges almost surely. This allows us to keep the proposal distribution fixed while updating the SIVI model, and update the flow afterwards with our unbiased forward KL gradient, which moves the current CNF closer to the optimal proposal distribution (the target) for the new SIVI model. This enables convergence to the global solution. We will make this more clear in an updated paper version and thank the reviewer for this question. 3. **Typo and Reference Update**: We have corrected the typo and greatly appreciate the reference to Andrade (2024). We will replace the current citation with Andrade (2024), as it provides a more relevant theorem on variance underestimation. 4. **Algorithm Clarification**: We updated the algorithm description to make it clear that θ refers to the parameters of τ (the normalizing flow). This change should clarify the notation and its role in the model. We hope these revisions address your concerns. Thank you again for your detailed feedback, which has been incredibly helpful in improving the clarity and rigor of our work. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my concerns. In light of the additional experiments and commitment to clarifications in the manuscript, as well as addressing the concerns of the other reviewers I will maintain my recommendation.
Summary: This paper proposes a new method to reduce the bias of semi-implicit VI (SIVI). The key idea is to estimate the problematic term $\nabla_z \log q(z) = \nabla_z \log E_{\epsilon} [q(z|\epsilon) ]$ using importance sampling, where the proposal distribution is a normalizing flow. The normalizing flow is learned to match the posterior distribution $q(\epsilon|z)$. If learned perfectly, it completely debiases the estimator. The authors propose a neat way for how the IS can be summed efficiently for large batches. Several experiments on different inference settings have been performed where the papers that the proposed method can compete with SOTA inference methods. Claims And Evidence: Yes Methods And Evaluation Criteria: The evaluation Criteria makes sense Theoretical Claims: I only checked 3.1 which to the best of my understanding, is correct. Experimental Designs Or Analyses: The experimental design makes sense Supplementary Material: No Relation To Broader Scientific Literature: I must admit I am not very familiar with the literature in this regard. I have listed a few papers that seem relevant which are not cited here. Essential References Not Discussed: [1] Ma, Chao, Yingzhen Li, and José Miguel Hernández-Lobato. "Variational implicit processes." International Conference on Machine Learning. PMLR, 2019. [2] Yu, Longlin, et al. "Hierarchical semi-implicit variational inference with application to diffusion model acceleration." Advances in Neural Information Processing Systems 36 (2023): 49603-49627. [3] Shi, Jiaxin, Shengyang Sun, and Jun Zhu. "Kernel implicit variational inference." arXiv preprint arXiv:1705.10119 (2017). [4] Zimmermann, Heiko, et al. "Nested variational inference." Advances in Neural Information Processing Systems 34 (2021): 20423-20435. Other Strengths And Weaknesses: ***Strengths* - The paper reads very well. The motivation of the paper makes sense and it offers a nice introduction to SIVI and the problems with it. - While the proposed approach might be simple, it makes a lota of sense and is a perfectly valid contribution for ICML. - The experiments are very systematic and well designed, on par with many other VI papers. **Weaknesses** - I must say the results seem abit underwhelming. One issue is that ofcourse most of the results figures (2,3,4) are difficult to read and judge, in the sense that is diffcult to compare the performance of a method to the ground truth as well as other methods. There are no tables with the exception of the toy experiment. It looks to me that everything performs roughly the same, while KSVI and PVI perform a bit better. I understand that beating SOTA is not a requirement but it should be explained abit given that these methods are used as baselines that what is the (potential) advantage here? Also how well the spline flow itself would have performed by itself (simply minimizing $KL[q|p]$ where $q$ is the NF.) - Related to the previous issues, some discussion of speed of convergence, its stability, and hyperparameter sensitivity would have been a good improvement. Having an expectation inside the gradient term (Eq. 14) seems very sensitive. I would expect $K$ to have to be large? How sensitive was the choice of $K$? Other Comments Or Suggestions: - In Figure 4, the magenta line and the blue line are not very color blind friendly and are hard to separate I would advise to use different coloring scheme . Also please use legends and label the x and y axis in your figures. Questions For Authors: See weaknesses and Strengths Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and valuable suggestions. We have made several improvements to address your concerns: 1. **Improved Figures and Tables**: We have revised Figures 2, 3, and 4 for better clarity and ensured they are color-blind-friendly. Specifically, we have revised the [figure of Experiment 3](https://figshare.com/s/f45697639e8908df84dd?file=53348420) to better illustrate the performance differences. Additionally, we have added a [table](https://figshare.com/s/f45697639e8908df84dd?file=53348411) for clearer comparisons. The updated results now show that in Experiment 3, AISIVI and KSIVI outperform all other methods in terms of log marginal likelihood. 2. **Additional Baselines**: We have expanded our comparisons to include UIVI and IWHVI. Furthermore, for the multimodal example, we added a [normalizing flow baseline](https://figshare.com/s/f45697639e8908df84dd?file=53348414). The results demonstrate that normalizing flows either smooth out low-density regions or introduce additional fragments (as observed with the neural spline flow). 3. **Convergence Speed and Stability**: KSIVI remains the fastest in terms of convergence speed, but our method is the only one that matches its performance. Interestingly, AISIVI does not require very large inner batches if the proposal distribution (e.g., a normalizing flow) is sufficiently flexible and learns faster per iteration than KSIVI. Our framework efficiently supports computationally expensive proposal distributions, such as CNFs, since we do not need to backpropagate through the proposal distribution when computing path gradients. This property is one of our main novel contributions. However, without a proposal distribution, large batch sizes are indeed necessary, as demonstrated by BSIVI. 4. **Computational Cost Comparison**: Following the suggestion to investigate computational costs, we have now performed comparisons based on the Conditioned Diffusion Process task for all methods. To facilitate a fair comparison, we now use the total iterations of KSIVI (outer loop iterations × inner loop iterations) instead of the outer iterations as done in Sec 5.3. While this shows that AISIVI learns much faster per iteration than KSIVI (as seen in the table), it also highlights the extraordinarily fast implementation of KSIVI when comparing run times. Overall, the comparison clearly shows that only the gold standard KSIVI and our AISIVI are able to reach log marginal likelihood values >70,000, while all other methods lag behind significantly. This confirms the visual comparison of Fig. 4 in the paper and demonstrates that such performance levels can be achieved without relying on a kernel-based approach. 5. **Citations**: We will incorporate all the suggested references and are very grateful for your recommendations, which further enhance our paper’s positioning within the broader literature. Thank you again for your constructive feedback, which has helped us refine and strengthen our work. We hope these improvements address your concerns. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and want to thank the authors for their response. Given the new results, I think the paper is good form, and I'm still happy to recommend acceptance. Regarding the concerns of other reviewers about baselines, it is hard for me to judge because I am not very familiar with this literature so I will ask AC to weigh other reviews more on this front.
Summary: Estimating the gradient of the KL divergence between SIVI models and (unnormalized) densities is the core difficulty for training SIVI models. Many efforts have been made to partially solve this problem using, e.g., MCMC, kernel methods, Monte Carlo sampling, etc. This paper presents a new method for training semi-implicit variational inference (SIVI) models based on importance sampling (IS). The most contribution of this paper is employing a learnable reverse model $\tau(\epsilon|z)$ to provide a variance-reduced estimate of the gradient. The experiments show that the proposed method has a comparable inference accuracy as baseline methods. Claims And Evidence: There are two important claims that are not verified by the experiments. - **On the high-dimensional application**. Line 57-59 explicitly states that 'In this work, ... enable us to train SIVI models even in high dimensions'. Line 151 states that "they can be scaled up to high dimensions". However, the largest dimension of the numerical examples is 100, which is not so large and is well studied by baseline methods (KSIVI). The authors should provide more high-dimensional evidence (e.g., AISIVI performs well and baseline methods fail) to support their claim. - **On the computational cost**. The authors emphasized the computational drawback of UIVI in Section 2.3 and "propose a novel method to fix the shortcomings" (Line 124). However, no detailed computational cost comparison, or the efficiency/accuracy trade-off is reported, making their claim unconvincing. Methods And Evaluation Criteria: Although using important sampling to estimate the gradient is a somewhat straightforward, the authors make theoretical analysis and careful designation which can be considered as novel contributions. However, there are some problematic point in the methodology: - Regarding the comparison against IWHVI (Soblev \& Vetrov, 2019), the author claims IWHVI is not applicable to the reparameterization trick. However, reparameterization trick is possible for IWHVI, as both the SIVI model and the reverse model are reparametrizable. I would like to see more explanation on this point. - The authors use an alternate optimization strategy for the SIVI model and importance distribution. A more natural choice is to optimize these two models simultaneously, and this strategy has been applied in VAE and IWHVI. It would be better to explain why an alternate optimization is considered here. Theoretical Claims: I've checked the correctness of the theoretical claims. Experimental Designs Or Analyses: ### **About the experimental designation** - **Missing high-dimensional application**. Please see 'claims and evidence.' - **Missing computational cost comparison**. Please see 'claims and evidence.' ### **About the baselines** As least the following two variants of SIVI should be considered: - UIVI. This paper "revisits" UIVI (see the title), by using important sampling to "fix the encountered shortcomings of UIVI" (Line 124). Therefore, UIVI should be considered as a very important baseline for the method. - IWHVI. This paper has the same idea of using importance sampling to estimate the ELBO in optimizing SIVI. ### **About the metrics** From the figures, AISIVI achieves very indistinguishable results compared to the baseline methods. In such a case, quantitative comparison (e.g., KL divergence, Wasserstein distance, ELBO, marginal likelihood) would be necessary to compare different methods. Supplementary Material: I reviewed the supplementary material. Relation To Broader Scientific Literature: Using importance sampling to improve the gradient estimation of SIVI is a meaningful effort, as this is a long-standing and difficult problem of SIVI. Essential References Not Discussed: No. Other Strengths And Weaknesses: I have no other comments. Other Comments Or Suggestions: One typo: There should be $\epsilon_i$ in the numerator of Eq. 15. Questions For Authors: I have no other questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and insightful feedback. We greatly appreciate your suggestions, and we have addressed the following points: 1. **High-dimensional Applicability**: We argue that the definition of "high-dimensional" depends on the specific goal. Finding an adequate approximation is achievable in higher dimensions, but our target for Experiment 3 was the true distribution. As shown in the newly added [table](https://figshare.com/s/f45697639e8908df84dd?file=53348411) and improved [figure](https://figshare.com/s/f45697639e8908df84dd?file=53348420), only our method and KSIVI from all SIVI variants come close to matching this distribution. Thus, for this scenario, 100 dimensions can be considered sufficiently high, as no other SIVI variant can achieve such good results. We will clarify in the final manuscript what we mean by 'high-dimensional' in our setting to avoid any confusion. 2. **Computational Cost**: We have now included UIVI in Experiment 3, and as you suggested, we’ve focused on the log marginal likelihood. It is clear that UIVI’s performance does not come close to AISIVI, even for a similar simulation budget. Note that we can only afford an inner batch size of 2 because of the expensive inner MCMC loop, which further emphasizes our point about drastically improved computational costs. Similar findings are reported in Appendix D of SEMI-IMPLICIT VARIATIONAL INFERENCE VIA SCORE MATCHING by Longlin Yu et al., which further supports our argument. Thank you for strengthening our paper with this suggestion. 3. **Reparameterization Trick and IWHVI**: We have corrected our previous claim, and indeed, the reparameterization trick can be applied to IWHVI. We appreciate your clarification on this point. 4. **Optimization Strategy**: The alternate optimization scheme is a key aspect of our method and one of the main contributions. By using the path gradient and alternating optimization, we avoid backpropagating through the proposal distribution. This allows us to process arbitrarily large inner batches, whereas a simultaneous optimization approach is limited by memory constraints. This design enables us to use a more computationally expensive proposal distribution (conditional normalizing flow), and as shown in our updated experiments, AISIVI outperforms IWHVI in terms of performance within a similar computational budget. This now serves as an ablation study comparing simultaneous and alternating training approaches. We have also included IWHVI as a strong baseline, as suggested. 5. **Typo**: We have corrected the typo in Eq. 15, as you pointed out. Thank you again for your valuable feedback and for helping to improve the clarity and strength of our paper. We hope these changes address your concerns effectively. Should the reviewer find the response satisfactory, we would appreciate reconsidering the initial score. Otherwise, we remain fully committed to addressing any remaining concerns during the second author response phase. --- Rebuttal Comment 1.1: Comment: Thank you for the additional results. For table 2 in your rebuttal, I still have some concerns. - On which experiment do you report these marginal likelihoods? and the device type? - As you have argued, IWHVI can be considered as a simultaneous ttaining ablation for AISIVI. Then why IWHVI uses more time than AISIVI? - I suggest the authors to give a more comprehensive study about the training cost, maybe a dimension scaling curve, since they have made strong claims on the reduced computational cost of AISIVI. Regarding the high-dimensional experiments, I still think such a numerical example (e.g., Bayesian neural network) would greatly strengthen the paper. However, it still looks good if other advantages are demonstrated. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal and providing additional feedback. We address the mentioned comments in the following point-by-point: 1. **Marginal Likelihoods and Device Details**: As mentioned in the previous response, the marginal likelihoods refer to Experiment 3 (the conditioned diffusion process). The updated figures illustrate where other methods fail in contrast to AISIVI and KSIVI. Regarding the device, as stated in the main text, the experiments were conducted on a Linux-based A5000 server with 2 GPUs, 24GB VRAM, and an Intel Xeon Gold 5315Y processor with a clock speed of 3.20 GHz. 2. **Training Time and Batch Size Adjustment**: We thank the reviewer for this valid comment. We, however, think that there is a misunderstanding. In this experiment, we wanted to compare the performance of models when fixing the budget of all methods to address the "efficiency/accuracy trade-off" you requested in your initial review. More specifically, we provided all methods with 10k iterations and approximately the same computational budget as AISIVI, which is reflected in the runtime data. To ensure a fair comparison, we fixed the outer batch size (number of sampled z) for all SIVI methods and adjusted the inner batch size (number of sampled epsilons) until we achieved approximately the same iterations per second as AISIVI. For IWHVI, this resulted in an inner batch size of 7000. As shown in the IWHVI paper, larger inner batch sizes tend to improve performance. 3. **Computational Cost Claims**: Our main claim in the paper is that AISIVI vastly outperforms UIVI in terms of efficiency. This is clearly shown in the updated Experiment 3 (in the rebuttal). In that experiment, we could only afford an inner batch size of 2 for UIVI, resulting in significantly worse performance compared to AISIVI. However, we are happy to provide a dimensional scaling plot in a revised version of the paper to further underpin this claim should the reviewer require further evidence. We hope these points help to clarify the additional comments by the reviewer. Thank you again for your thoughtful feedback! We appreciate the reviewer’s suggestion regarding a high-dimensional example. We recognize its value in further illustrating our findings and are currently exploring the feasibility of incorporating such a case within the scope of this revision.
Summary: This paper revisits Unbiased Implicit Variational Inference (UIVI), which has been largely dismissed due to its computational cost and imprecision from the inner MCMC loop. The authors propose replacing MCMC with importance sampling. By minimizing the expected forward Kullback–Leibler divergence, they ensure an unbiased estimation of the score gradient, making SIVI more efficient in high-dimensional settings. Authors provide detailed derivations of their proposed methods with appropriate proofs and algorithms. Experimental results show that their approach outperforms or matches state-of-the-art SIVI methods, advancing both theoretical understanding and practical implementation of variational inference. Claims And Evidence: Good. Methods And Evaluation Criteria: Yes. Theoretical Claims: Glanced at the derivations but not into their details. Experimental Designs Or Analyses: * For experiment 1, there is no comparison with other methods. Maybe it is unnecessary since all methods might doing well on this. * For experiment 2 and 3, although there are comparisons with other methods, I didn't find a clear benefit of the proposed methods. From the results and the figures, it seems like all methods are doing equally well. Then what are the real benefits of the proposed methods? Supplementary Material: * The appendix is good. * However, it seems like there is no code provided. Relation To Broader Scientific Literature: / Essential References Not Discussed: / Other Strengths And Weaknesses: * The method derivation in this paper is good and solid. * The visualization of the experiment part is clear. Other Comments Or Suggestions: * A lot of $\in$ should be $\subseteq$ in this paper. * Eq. (2) $f_\phi(\boldsymbol \epsilon) \sim q_{\boldsymbol y}$ is a confusing equation. Should be $y = f_\phi(\boldsymbol \epsilon)$. Questions For Authors: * For experiment 2, authors plot the measured correlation of $\beta$ in Fig. 3. What does a negative correlation mean? If the authors want to show that after matching with the true parameter of SGLD, the authors should plot the learned $\beta$ and the true $\beta$ after matching and showing that they form a diagonal line. But what authors plot is a pairwise correlation of the learned $\beta$ and the true $\beta$. If plotting correlation, I would hope to see more close to 1 correlations, after matching the learned with the true. Could authors explain more about this figure and what it implies. Thanks! * In table 1, a close to 0 KL divergence means a good approximated distribution. However, the overall magnitude of the KL divergence may vary, depending on the nature of the true distribution. Is there a way to normalize the KL divergence? Otherwise, people don't know whether a number like 0.8 is close to 0 or not good enough, which will not be helpful compared with a direct visual comparison. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: 1. **Experiment 1 Comparison**: We agree that no comparison is needed for Experiment 1, as it primarily serves as a sanity check rather than a competitive benchmark. 2. **Correlation in Experiment 2**: We computed all possible correlations separately for the true $\beta$ and the estimated $\beta$, considering each coordinate individually. A negative correlation, $\rho_{ij}$, means that $\beta_i$ and $\beta_j$ (where $\beta$ represents here for example the true $\beta$ in this case) are negatively correlated. After obtaining these correlations for both the true and estimated $\beta$, we matched them so that the best possible outcome would form a diagonal line, ensuring alignment between the learned and true $\beta$. In Eq. 36, this can also be seen since the correlations are always computed with respect to a fixed parameter vector. We will adapt the manuscript to make this process clearer for the reader. Our results demonstrate the state-of-the-art performance among SIVI variants. We will clarify this point again in the paper and thank the reviewer for the comment. 3. **Experiment 3 Evaluation**: > “From the results and the figures, it seems like all methods are doing equally well.” In Experiment 3, we included a [table](https://figshare.com/s/f45697639e8908df84dd?file=53348411) demonstrating that only our proposed method (AISIVI) performs on par with the state-of-the-art KSIVI method. Additionally, we have improved the [figure](https://figshare.com/s/f45697639e8908df84dd?file=53348420) to better highlight that AISIVI is the only method capturing variance as effectively as KSIVI. 4. **Code Availability**: We have inquired with the area chair about the permissibility of providing the code currently hosted on (Anonymous) Github (but the rebuttal rules only allow figures and tables to be linked). 5. **Notation and Equation Clarifications**: The use of subset notation (⊆) is indeed correct, given our definition of $\mathcal{R}$. However, we recognize that this notation might lead to misunderstandings and will adapt it to a more standard form. We have also incorporated the suggested clarification for Eq. (2). 6. **KL Divergence Normalization**: As far as we know, there is no standard way to normalize KL divergence. However, to provide better intuition, we have added a [figure](https://figshare.com/s/f45697639e8908df84dd?file=53348417) depicting different training states alongside their KL values, helping to contextualize the reported numbers. We hope these clarifications address your concerns and demonstrate our commitment to improving the manuscript. Thank you again for your valuable feedback. Should the reviewer find the response satisfactory, we would appreciate reconsidering the initial score. Otherwise, we remain fully committed to addressing any remaining concerns during the second author response phase. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal, I'm not very confident, but most of my concerns have been addressed, and I have raised my score from 2 to 3.
null
null
null
null
null
null
LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits
Accept (poster)
Summary: The authors of this paper tackle the important problem of LLM quantization, enabling fine-tuning below 2 bits per parameter with minimal performance loss. This is achieved through the proposed LowRA framework, which addresses three key challenges in quantized LoRA fine-tuning: coarse-grained precision assignment, discrepancy in data distribution, and lack of high-performance quantization primitives. The LowRA framework effectively resolves these challenges via different techniques such as per-output-channel quantization, groupwise normalization, data-free post-training quantization, per-output-channel thresholds and mappings, data-free one-shot post-training quantization, and user-defined compression ratios. Furthermore, the authors provide practical CUDA-based primitives for seamless implementation. The paper's experimental results demonstrate that LowRA outperforms baselines at the same precision and achieves similar performance to baselines at lower precisions. Furthermore LowRA results in substantial memory savings. This makes LowRA a promising solution for embedded systems and mobile devices. # Update after rebuttal As mentioned in the comment below, I have decided not to update my scores. Claims And Evidence: Yes, the claims made in this paper are supported through clear and convincing evidence. In particular the authors performed experiments using four LLMs and four tasks. Furthermore, the authors provided ablation studies of the different components in LowRA which is an important addition to the study. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are ideal for the application in hand. The metrics used are clearly defined and are commonly used in this setting. Theoretical Claims: N/A. The paper does not make any theoretical claims. Experimental Designs Or Analyses: The main experimental results are highlighted in Table 2. I have the following comments/questions for the authors here: * LoftQ outperforms LowRA in terms of accuracy for 4 Bit LLaMA-2-7B in WikiText-2 and in terms of ROUGE1 for 4 Bit BART-large for XSUM. I might have missed this but am curious to know the authors thoughts in this regard. * 4Bit LLaMA-2-13B has the same performance for all quantized LoRA fine-tuning. I am curious regarding why this is the case? * The authors metnion PiSSA and ApiQ in the background section. Clear comparison against these methods (even in the supplementary section) would make the claims of this paper even stronger. Supplementary Material: Yes, I briefly went over the supplementary materials including sections A-K. I paid closer attention to sections B (Ablation Studies), C (Memory Requirements) and H (Parameter Variances ...) Relation To Broader Scientific Literature: The findings of this paper allows for LLMs to be adopted for systems with low memories such as embedded systems and mobile devices. This expands the use case of LLMs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths * The problems of the current state of quantized LoRA fine tuning is clearly highlighted. * The LowRA framework is clearly defined * The ablation study is very helpful Weaknesses * It would help readability if the different components of the LowRA framework were explicitly mapped to the three limitations tackled in this paper. * In many cases authors use data-free post-training which helps with the generalizability of the approach. However, for embedded systems fine-tuning with data can be helpful in some cases. I would like to know the authors thoughts in this aspect. Other Comments Or Suggestions: N/A Questions For Authors: N/A. I have already posed my questions in the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer BPFc for their insightful feedback! Below, we address the comments from the **Weaknesses** section in detail. We will incorporate your valuable feedback into the revised manuscript. --- ## Weakness 1: Clarity on LowRA’s Components > **Reviewer Concern**: The paper would benefit from more explicit mapping between the three limitations tackled and the different components of the LowRA framework. **Our Response**: We will clarify how each component of the LowRA framework aligns with each of the three limitations. Specifically: 1. **Limitation 1 (Coarse-Grained Precision Assignment)** - **P2: Precision Assigner** - **T3: Precs** 2. **Limitation 2 (Discrepancy in Data Distribution)** - **P1: Mapping and Threshold Learner** - **T2: Fine-Grained Mappings and Thresholds** - **P4: Low-Rank Initializer** and **T5: Intelligently Initialized Low-Rank Tensors** (the low-rank initialization helps mitigate quantization errors) 3. **Limitation 3 (Lack of High-Performance Quantization Primitives)** - **P3: Output-Channelwise Quantize Kernel** - **P5.1: Output-Channelwise Dequantize Kernel** We will include both graphical annotations and supporting text in the paper to further illustrate these mappings. --- ## Weakness 2: Fine-Tuning with Data in Embedded Systems > **Reviewer Concern**: Many approaches favor data-free post-training for generalizability. However, in embedded systems, fine-tuning with data can be more practical. How can LowRA adapt to such scenarios? **Our Response**: We interpret your comment as referring to scenarios where there is some prior knowledge of the tasks for which the LoRA base weight will be used. In embedded systems, these base weights might be: ### Case 1: Dedicated to a Single Task - **Using Task-Specific Data for Post-Training Quantization (PTQ).** If the base weights are used solely for one task, that task’s dataset can serve as a calibration set to learn precision assignments, thresholds, mappings, and rounding schemes. - **Quantization-Aware Training (QAT).** Another possibility is QAT. Preliminary experiments, however, showed that QAT did not outperform PTQ in terms of perplexity/accuracy—likely because LoRA’s low-rank adapters help compensate for quantization errors. Moreover, QAT demands more memory and introduces latency overhead. ### Case 2: Shared Across Multiple Tasks - **Representative Calibration Set for PTQ.** If multiple tasks share certain characteristics, a representative calibration set may be synthesized for PTQ to learn shared precisions, thresholds, mappings, and rounding schemes. - **Possible QAT Approaches.** While still possible, QAT in multi-task scenarios faces the same drawbacks regarding memory and computational overhead. Moreover, the learned precisions, thresholds, mapping, and rounding schemes may overfit the calibration set and fail to generalize to each individual downstream task. ### Other Scenarios and Future Directions - **Encoding vs. Decoding Schemes.** One strategy is to fix encoding (thresholds) globally and fine-tune only the decoding (mappings) per task. However, preliminary implementations showed significant latency overhead (e.g., scatter-add operations). - **Future Extensions.** We will continue exploring these ideas, focusing on deeper comparisons between PTQ and QAT under varying deployment constraints and use cases. --- ## Conclusion We appreciate your feedback and have outlined our approach to making LowRA clearer and more applicable to embedded scenarios. Your comments have inspired further investigations and future directions, which we look forward to sharing in subsequent work. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. In particular, I appreciate them replying to my concerns. However, after going through the other reviews and comments I have decided not to update my score at this time. --- Reply to Comment 1.1.1: Comment: Dear Reviewer BPFc, We deeply thank you for your prompt and kind response. Thank you for your effort in putting together these valuable reviews. Your feedback has helped us tremendously. We unanimously agree that your observation on learning fine-grained quantization schemes with data for embedded settings opens up a lot of valuable research opportunities for us and the community at large. **In the following days, we would appreciate if you could let us know further questions you find interesting or important. We would try our best to address these questions.** We understand that it has been a lot of effort reading through the submissions and putting together reviews. We sincerely appreciate your effort. Hope you enjoy your days! Look forward to your response. Best, Submission13809 Authors
Summary: This paper introduces LowRA, a novel framework that enables LoRA fine-tuning below 2 bits per parameter while maintaining model performance. The work addresses three key limitations of existing quantized LoRA methods through innovative techniques: fine-grained precision assignment, adaptive quantization mapping/thresholding, and efficient CUDA kernels for low-bit execution. The authors demonstrate significant memory savings (up to 50%) while achieving comparable or better performance compared to state-of-the-art methods. ## update after rebuttal Thank you for the author's response. After referring to the author's response and the comments of other reviewers, I decided to keep my score. Claims And Evidence: The paper's claims are well-supported by comprehensive experimental results across multiple models (LLaMA-2-7B, 13B, 30B, BART-large) and tasks. The authors provide extensive ablation studies and detailed analysis of each component's contribution. The performance improvements and memory savings are clearly documented with quantitative metrics. Methods And Evaluation Criteria: The proposed methods are technically sound and well-motivated. The evaluation methodology is comprehensive, including: 1) Multiple model architectures and sizes 2) Various downstream tasks 3) Standard metrics (perplexity, ROUGE scores) 4) Detailed ablation studies 5) Memory usage analysis Theoretical Claims: The theoretical foundation is solid, with clear mathematical formulations for: 1. Weighted Lloyd-Max quantization 2. Two-level ILP-based precision assignment 3. Channelwise precision optimization 4. Memory footprint analysis Experimental Designs Or Analyses: The experimental evaluation is thorough and convincing: 1. Comprehensive baseline comparisons 2. Extensive ablation studies 3. Clear performance benchmarks 4. Detailed memory analysis 5. Multiple model scales tested Supplementary Material: The supplementary material is comprehensive, including: 1. Detailed CUDA kernel implementations 2. Extended ablation studies 3. Complete hyperparameter settings 4. Additional experimental results 5. Memory usage analysis Relation To Broader Scientific Literature: The work builds upon and advances existing research in: 1. LoRA fine-tuning 2. Model quantization: QLoRA, LoftQ 3. Efficient LLM deployment 4. Low-bit neural networks Essential References Not Discussed: The paper adequately covers the relevant literature in LLM quantization and fine-tuning. Other Strengths And Weaknesses: **Strengths:** 1. First framework to enable LoRA fine-tuning below 2 bits. Achieved 1.75 bits on LLaMA-2-7B/13B and BART-large, and 1.15 bits on LLaMA-30B with competitive performance. No previous methods support sub-2-bit operation. 2. Superior performance-precision trade-off. At 2-bit quantization, LowRA achieves perplexity reduction of 2.21 (WikiText-2) and 1.45 (Open-Assistant) over QLoRA, and 1.76 (WikiText-2) and 1.12 (Open-Assistant) over LoftQ. 3. Significant memory savings. Reduces memory usage by 30-50% compared to 4-bit baselines: - 40% lower memory for LLaMA-2-13B inference (2-bit vs 4-bit) - 30% lower for LLaMA-2-7B inference - 50% reduction for LLaMA-30B at 1.15 bits 4. Theoretically sound precision assignment strategy. Two-level ILP formulation with proven convergence, validated by ablation studies showing effectiveness of precision assigner. 5. Hardware-friendly design. Supports deployment on resource-constrained devices like Raspberry Pi 4 (4GB RAM) for LLaMA-2-7B and Tesla T4 (16GB VRAM) for LLaMA-30B. **Weaknesses:** 1. Limited analysis of computational overhead during training. Paper focuses primarily on memory savings, with limited discussion of training time impacts in Section 7 and Appendix E. 2. Incomplete exploration of ultra-large models. Experiments stop at LLaMA-30B, without testing on larger models like LLaMA-65B or GPT-3 (175B). 3. Figure 10 shows variance analysis but lacks detailed explanation of impact on different layer types. 4. While training curves are shown, there's limited discussion of potential instability issues during ultra-low-bit training. 5. Impact on inference latency not thoroughly analyzed. Memory reduction is well-documented, but timing implications are not comprehensively measured and reported. Other Comments Or Suggestions: None Questions For Authors: Please refer to Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer oGG6 for their thorough review. We are glad that they find our work valuable overall. Below, we address the key concerns. We appreciate your feedback and will integrate it—along with additional experimental findings—into the revised manuscript. --- ## Training Overhead We benchmarked **LowRA** at various bit widths against **QLORA (4 bits)** on both an **A5000 (24GB VRAM)** and **A100 (80GB VRAM)**. ### Llama 7B A500, Runtime (ms) | Seq. Len. | Batch Size | 1.5 Bit | 2.5 Bit | 4 Bit | QLORA (4 Bits) | Max % Overhead | |--------|-------|----------|-------|-------|------------|--------| | 256 | 1 | 782.67 | 789.37 | 782.67 | 754.27 | 4.65% | | 512 | 1 | 1291.75 | 1295.82 | 1291.75 | 1268.50 | 2.15% | | 1024 | 1 | 2398.10 | 2398.96 | 2398.10 | 2383.68 | 0.64% | | 256 | 2 | 1268.17 | 1273.97 | 1268.17 | 1247.15 | 2.15% | | 512 | 2 | 2319.83 | 2322.01 | 2319.83 | 2304.82 | 0.75% | ### Llama 13B A5000, Runtime (ms) | Seq. Len. | Batch Size | 1.5-bit | 2.5-bit | 3-bit | 4-bit | QLORA (4-bit) | Max % Overhead | |-----|-------|------|------|------|------|------|--------| | 256 | 1 | 1494.69 | 1494.69 | 1501.24 | 1508.03 | 1444.20 | 4.42% | | 512 | 1 | 2486.00 | 2486.00 | OOM | OOM | OOM | N/A | | 256 | 2 | 2457.75 | 2457.75 | OOM | OOM | OOM | N/A | ### Llama 7B A100, Runtime (ms) | Seq. Len. | Batch Size | 1.5-bit | 2.5-bit | 3-bit | 4-bit | QLORA (4-bit) | Max % Overhead | |----|------|--------|-------|---|-----|-------|-----| | 256 | 1 | 632.09 | 632.36 | 631.85 | 639.31 | 589.12 | 8.52% | | 512 | 1 | 1073.68 | 1074.60 | 1072.91 | 1079.48 | 1030.78 | 4.72% | | 256 | 2 | 1056.94 | 1056.22 | 1055.86 | 1063.54 | 1014.91 | 4.79% | LowRA introduces minimal overhead (with a maximum of 8.52%) and supports configurations **QLORA** cannot run without out-of-memory (OOM) errors. --- ## Results on Ultra-Large Models We evaluated **LowRA** on **LLaMA-65B** (1.15 bpp), achieving perplexities of **7.49** (WikiText) and **4.97** (OpenAssistant), outperforming **LLaMA-33B’s** **8.00** and **5.73** under identical hyperparameters. This confirms **LowRA’s** scalability to ultra-large models. --- ## Variance Trends Across Different Layer Types We will supplement our paper with fine-grained plots motivating output-channelwise quantization. --- ## Training Instability Even at 1.15 bits, we observed no instability or divergence, largely thanks to (1) LoftQ’s alternating SVD-based initialization of low-rank tensors and (2) freezing base weights while only updating low-rank tensors. We will include stability discussions in our paper. --- ## Inference implications **Inference implications involve three primary aspects: Loading Latency, Time to First Token (TTFT), and End-to-End Throughput.** We discuss each part here. 1. **Loading Latency** This is the time to load model weights into GPU memory. On an RTX A4000 (LoRA branch in float32), LowRA reduces loading latency significantly, as shown in the table: | Framework | Bits/Param | Loading Latency | |-----|------|-------| | QLoRA | 4 | 220.73 s | | LowRA | 4 | 218.17 s | | LowRA | 1.5 | 157.51 s (28.6% reduction) | 2. **TTFT** TTFT is the delay before the first token is produced. While smaller models transfer less data and slightly reduce TTFT, it's primarily compute-bound, so quantization alone provides modest gains. 3. **End-to-End Throughput** Once the model is loaded and past TTFT, throughput (tokens or requests per second) becomes the key factor. ### Llama 7B on 1 RTX 3080 (unit: tokens / sec; float32) |Prefill Length | Decode Length | 1.5 Bit | 2 Bit | 2.5 Bit | 4 Bit | QLoRA (4 Bit) | |------|----|---|----|-------|-------|-----| | 100 | 10 | 32.16 | 30.33 | 24.35 | 9.19 | 9.40 | ### Llama 13B on 1 A4000 (unit: tokens / sec; float32) | Prefill Length | Decode Length | 1.5 Bit | 2 Bit | 2.5 Bit | 4 Bit | QLoRA (4 Bit) | |----|-----------|------|----|------|--------|-----| | 100 | 10 | 16.11 | 14.46 | 13.64 | 12.17 | 11.56 | Across **Llama 7B** on an **RTX 3080**, 1.5-bit LowRA delivers a **3.42× throughput increase** over QLoRA (32.16 vs. 9.40 tokens/sec), while on Llama 13B using an **RTX A4000**, **1.5-bit LowRA** still achieves a **1.39× speedup**—demonstrating that sub-2-bit quantization offers clear performance gains across varying model sizes. These results confirm that LowRA provides **real-world throughput gains beyond the documented memory savings**.
Summary: This paper introduces **LowRA**, a novel framework for LoRA-based fine-tuning of LLMs in ultra-low bit (sub-2-bit) settings. LowRA is the first to enable LoRA fine-tuning at or below 2 bits with only minor accuracy/perplexity losses and achieves considerable memory savings (30–50%). The authors observe that current quantized LoRA methods (e.g. QLoRA, LoftQ) typically fail or degrade significantly below 2-4 bits. To address this, LowRA proposes a fine-grained, mixed-precision quantization strategy that - (a) learns quantization thresholds and representative values (mappings) via a weighted Lloyd-Max procedure, (b) uses a two-level ILP solver to assign channel-wise precisions, and (c) includes optimized low-bit CUDA kernels tailored to LoRA. Experiments on Llama-2 (7B, 13B parameters) and BART-large, across tasks including WikiText-2, OpenAssistant, CNN/DailyMail, and XSum, with additional experiments on Llama 33B show that LowRA outperforms baselines in the performance-precision trade-off. Notably, LowRA is the first method to enable Llama-30B fine-tuning on a single NVIDIA Tesla T4 (16GB VRAM). Claims And Evidence: - **Claim A:** LowRA can fine-tune LLMs below 2 bits (down to ~1.15 bits in some cases) without catastrophic performance degradation. **Evidence:** Empirical results on Llama-2–7B and 13B (down to ~1.75 bits) [from Table 2] and Llama-33B (down to ~1.15 bits) [from Table 3] show perplexities close to baseline 2–4-bit methods. - **Claim B:** LowRA yields superior or on-par performance compared to existing sub-4-bit baselines (QLoRA, LoftQ) while using fewer bits. **Evidence:** On WikiText-2 and OpenAssistant, LowRA at ~2.5 bits approaches or exceeds the performance of QLoRA/LoftQ at 4 bits. [from Table 2] - **Claim C:** LowRA substantially reduces memory usage (30–50%), valuable for resource-constrained fine-tuning and on-device inference. **Evidence:** The authors detail memory estimates for Llama 2-7B,13B, & Llama-33B) in Appendix C. Methods And Evaluation Criteria: 1. WikiText-2 (language modelling) and OpenAssistant (multi-turn conversation) are evaluated via perplexity. 2. XSUM and CNN/DailyMail (summarization tasks) are measured with ROUGE scores. *Datasets make sense for the problem at hand.* *Perplexity and ROUGE are among the most widely accepted metrics for these tasks. They ensure that both language generation quality (summarization) and text coherence/prediction (perplexity) are rigorously tested.* Theoretical Claims: The main theoretical component is the ILP-based assignment of channel-wise precisions. While no formal proof of optimality is given for the entire method’s synergy with LoRA - the authors do provide a well-defined ILP objective and a hierarchical solution scheme. The Weighted Lloyd-Max approach is grounded in standard quantization theory. Everything is consistent with known quantization methods; there are no obviously incorrect proofs. The approach is algorithmic rather than strictly theoretical, and the correctness of the optimization steps seems reasonable. Experimental Designs Or Analyses: The experimental designs and analyses seem fine. Supplementary Material: While no supplementary material was included, I did review the appendices (A-K). Relation To Broader Scientific Literature: LowRA builds directly on the concept of parameter-efficient fine-tuning (PEFT), esp. LoRA (Hu et al., 2021), and extends quantized LoRA approaches like QLoRA and LoftQ. It adopts groupwise quantization ideas from prior work (e.g., GroupWise Normalization in QLoRA), but pushes further by combining a Weighted Lloyd-Max algorithm for adaptive thresholding with two-step ILP-based precision assignment, enabling sub-2-bit training. This unified approach addresses limitations observed in earlier methods (which were mostly restricted to 2-4 bits) and draws on established mixed precision and quantization techniques. Essential References Not Discussed: None - to the best of my knowledge. Other Strengths And Weaknesses: **Strengths** 1. Empirical gains are convincing - particularly that performance remains stable even in ultra-low bit settings below 2 bits. 2. Overheads of the framework components are discussed. (Appendix E) 3. In the low bit range (2-4), the proposed framework outperforms LoftQ and QLora. (Table 2.) 4. The paper is easy to read and follow. Good overall presentation. esp. sections 5 and 6 - the core components of the paper. **Weaknesses** 1. Although the framework supports sub-2-bit fine-tuning, insights into how sub-2-bit quantization affects LLM behaviour (e.g., interpretability, generalization) are limited. Some fundamental insight would be valuable, not just system performance. 2. While the paper makes progress in enabling ultra-low-bit LoRA fine-tuning and serving, it's not clear whether such ultra-low-bit quantized models outperform smaller models quantized at slightly higher bit widths. For instance, from Table 2, the **2-bit LowRA Llama-2-7B** (3808MB as from Figure 8) is comparable to the **1.8-bit LowRA Llama-2-13B** (5627MB as from Figure 8). Smaller models seem like a better choice than larger, more aggressively fine-tuned models esp. at lower bit widths scenarios. This raises an important question of whether it's always better to fine-tune larger models at lower precision, or whether smaller models at slightly higher precision might offer a better trade-off in both accuracy and throughput. A deeper discussion or analysis of this trade-off would strengthen the paper. 3. The motivation of the paper is ultra-low-bit fine-tuning for resource-constrained environments. Yet, all experiments were done on high-end A100. It would be valuable to include at least some inference-time measurements (e.g., latency, throughput, memory footprint) on representative low-resource hardware. This would help contextualize the practical system-level benefits and potential overheads of the proposed CUDA kernels and fine-grained quantization approach. Other Comments Or Suggestions: Please see the word spacing for "Data-Free Post-Training Quantization". (~line 141) Please fix the spelling error - "Quantzation". (Figure 5 caption) Summarization Summarization (~line 699) The mislabeled memory footprint of Llama 33B. (Figure 9) Please refer to Meta's first generation of mid-tier model as Llama-33B (as was officially announced). There are numerous references to Llama-30B. The right y-axis for the perplexity score exaggerates the drop visually. A full-scale axis would be a good choice, esp. for readers assessing quality vs. compression trade-offs. (Figure 7/8.) In Algorithm 1, it would help to clarify the definitions of inputs like N, w(1),…,w(K). W_k and W_sum. Questions For Authors: In many real-world settings, the cumulative resources required for inference in deployment far exceed the one-time costs of fine-tuning. **Q1.** Do you have inference throughput benchmarks (e.g., tok/sec) for inference with and without LowRA’s sub-2-bit quantization? This would clarify real-world gains beyond memory savings. **Q2.** Can ultra-low-bit quantization with LowRA justify choosing a larger model (e.g., 13B @ 1.5 bits) over a smaller 7B model at 2–4 bits (with LoftQ)? Any guidance on this trade-off? Thanks. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer 7mTX for their thorough and insightful review. We are glad that 7mTX finds our empirical gains convincing and our paper easy to read and follow. We address all feedback below and will incorporate it, along with new experimental findings, into the revised paper. --- ## Q1: Inference metrics on consumer-grade hardware Our experiments target GPUs that are relatively constrained for large language model (LLM) inference—an **NVIDIA RTX 3080** with **10GB VRAM** and an **NVIDIA RTX A4000** with **16GB VRAM**—to underscore how LowRA can unlock efficient inference without requiring high-end data-center hardware. ### Llama 7B on RTX 3080 (unit: tokens / sec; float32) | Metric | Prefill Length | Decode Length | 1.5 Bit | 2 Bit | 2.5 Bit | 4 Bit | QLoRA (4 Bit) | |---------|----------------|---------------|---------|--------|---------|--------|---------| | **Throughput**| 100 | 10 | 32.16 | 30.33 | 24.35 | 9.19 | 9.40 | | **Speedup** | 100 | 10 | 3.42x | 3.23x | 2.59x | 0.98x | 1.00x | - At **1.5 bits**, throughput reaches **32.16 tokens/sec**, yielding a **3.42× speedup** over QLoRA (9.40 tokens/sec). - Even at **2 bits**, LowRA surpasses QLoRA by over **3×**, indicating that more aggressive quantization pays off on memory-limited hardware. ### Llama 13B on A4000 (unit: tokens / sec; float32) | Metric | Prefill Length | Decode Length | 1.5 Bit | 2 Bit | 2.5 Bit | 4 Bit | QLoRA (4 Bit) | |---------------|----------------|---------------|---------|--------|---------|--------|---------------| | **Throughput**| 100 | 10 | 16.11 | 14.46 | 13.64 | 12.17 | 11.56 | | **Speedup** | 100 | 10 | 1.39x | 1.25x | 1.18x | 1.05x | 1.00x | - For **Llama 13B**, 1.5-bit LowRA achieves **1.39×** speedup over QLoRA, and 2-bit still delivers a **1.25×** improvement. - Though the gains are smaller than on the 7B model—likely due to higher overall computation overhead—LowRA still outperforms QLoRA across all tested bit-widths. These results confirm that our sub-2-bit LowRA quantization provides real-world throughput gains beyond the documented memory savings. --- ## Q2: The tradeoff of model size and compression ratio. In practice, users often start with a specific model, a fixed hardware memory budget, and particular tasks in mind. LowRA is designed to help maximize performance under those given constraints. While we do observe that ultra-low-bit quantization can make it feasible to run larger models within limited memory, our current study does not fully investigate the potential trade-off between choosing a bigger model at extremely low bits (e.g., 13B at 1.5 bits) and opting for a smaller model at higher bits (e.g., 7B at 2-4 bits). We see further exploration of this bits-per-parameter versus performance trade-off - along with automatic selection of the optimal model-compression combination - as an important direction for future research. Moreover, our work serves as a stepstone for future works to further optimize the task performance of models in the lower bit range. --- ## Presentation Errors: Graphs, Text, and Algorithms Thank you for your attention to details! We have fixed all of the presentation errors you pointed out. We will iterate through our draft rigorously to ensure our presentation is accurate. 1. **Typographical and Spacing Fixes** - Corrected spacing for “Data-Free Post-Training Quantization.” - Fixed spelling errors (e.g., “Quantzation” → “Quantization,” “Summarization Summarization” → “Summarization”). 2. **Figure and Label Updates** - Corrected the mislabeled memory footprint for Llama 33B in Figure 9. - Standardized references to “Llama-33B” (instead of “Llama-30B”). - Adjusted the y-axis to a full-scale view in Figures 7 and 8, ensuring a clearer comparison of perplexity scores. 3. **Algorithm Clarifications** We have added clarifications to the algorithm. Specifically, $N$ refers to the total number of output channels in the model being quantized. The values $\{ w^{(1)}, \dots, w^{(K)} \}$ represent the distinct parameter sizes (i.e., the number of parameters) across all output channels. The set $I_k$ contains the channel indices belonging to size group \(k\). Each channel \(i \in I_k\) has a parameter count of $w^{(k)}$. We define: $$ W_k = \sum_{i \in I_k} w^{(k)}, $$ which is the sum of parameter counts for all channels in partition $I_k$. Finally: $$ W_{\text{sum}} = \sum_{k=1}^K W_k $$ is the total parameter count across all partitions (i.e., the total number of parameters in the model to be quantized). --- Rebuttal Comment 1.1: Comment: I sincerely thank the authors for their time and effort in addressing my questions. particularly for sharing the inference-related implications. LowRA unlocks more quantization choices (below 2 bits) for LLMs - which naturally raises this question of finding the optimal model-compression combination. While this tradeoff does not fall within the scope of this paper, I take it as a positive sign that more works could be built on this paper to explore more in this domain. I will update my recommendation to 3. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 7mTX, Thank you for your prompt response! It is our pleasure to have this submission reviewed by you. Your sharp and detailed feedback has helped us crucially in furthering and enhancing this line of research. In particular, your question on the inference metrics has prompted us better complete our work in incorporating aspects of system performance. Moreover, your astute consideration on the tradeoff between original model size and compression ratio has pointed us (and the quantization community) to a new/yet-to-be-investigated topic of research. **In the coming days, we would like to address any further questions/concerns you may have. We would try our best to enhance our work to be better received by the community.** We understand it is a lot of effort putting together these valuable reviews that attend to so many details. We sincerely appreciate your effort in cultivating a healthy, mutually-motivating community. Best wishes, Submission13809 Authors
null
null
null
null
null
null
null
null
Exploring Large Action Sets with Hyperspherical Embeddings using von Mises-Fisher Sampling
Accept (poster)
Summary: The paper considers exploration in large action spaces where simple baselines like epsilon-greedy are impractical. The authors motivate that prior SoTA on this problem uses approximate nearest neighbor search to inform a truncated version of Boltzmann exploration, which does not have a clean theoretical characterization. By contrast, the authors propose Von Mises Fisher exploration to directly sample the action embedded in a unit sphere, with exploration asymptotically matching that of Boltzmann exploration. This is combined with experimental evidence to support the theory along with a theoretical comparison to boltzmann exploration. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes Theoretical Claims: I believe the assumption of uniformity in the distribution of the action embeddings makes the result less interesting. On the other hand, with a heavily imbalanced sample of the action embeddings, a very low probability embedding could nevertheless end up being sampled much more significantly with the proposed strategy due to a large voronoi cell attracting the nearest neighbor search to it, which would nevertheless not get sampled in vanilla Boltzmann exploration. Having said that, I believe the authors allude to something like this in the limitations and future work section. Experimental Designs Or Analyses: The authors explore the performance for non iid embeddings like the Glove Vectors in the appendix and a real world deployment where they claim it showed benefits, though the baseline is unclear. While the authors compare with Boltzmann exploration, given the motivation in the introduction it would be interesting to also compare to the truncated boltzmann distribution (similar to the wolpertinger policy) which seems to be missing. Supplementary Material: Not in detail Relation To Broader Scientific Literature: The authors introduce a new and simple exploration strategy that involves sampling a unit embedding from the vonmises fisher distribution and then running a cheap ann search as the exploration mechanism. This is in contrast to prior work that proposed truncated Boltzmann exploration for similar problems (TB-exp with several references listed in the paper) Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: Simple exploration strategy that's intuitively easy to appreciate and simple to implement. Proof of evidence of deployment with clear benefits recognized in a production setting is impressive. Weaknesses: The justification for why this is a good idea makes a uniformity assumption on the embeddings which doesn't seem very accurate, and more importantly, the proposed reasoning for why this helps does not appear to hold when violating this assumption. It is plausible that the scheme is still a good idea for other reasons, but I am not sure the current theory indicates anything of that sort. Other Comments Or Suggestions: n/a Questions For Authors: Have you considered interpreting the algorithm as accounting for uncertainty in the query embedding? I wonder if that might give an alternate potential justification for why this could be a good idea, especially when the catalogue embeddings are not uniformly distributed on the hyper-sphere. Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **On the interpretation of the algorithm** The interpretation of our algorithm as a way of accounting for the uncertainty in the query embedding is extremely interesting and will be further investigated in subsequent work. Indeed, the operation of sampling $\tilde{V}$ from a directional distribution, such as von Mises-Fisher, around the initial state vector $V$ can be understood as the embedding equivalent of reformulating a query in information retrieval tasks. With our method, the extent to which the query embedding can be reformulated is entirely controlled by the parameter $\kappa$, which can either remain constant across the hypersphere (as in our offline experiments in Section 4.3 and Appendix H) or depend on additional information, such as the local density of embeddings (as in the real-world A/B test detailed in Appendix I). As the need for reformulation generally arises when the intent behind the query is ambiguous, this could explain why our method benefits recommender systems based on vector searches. Since users do not explicitly formulate their needs, the system must infer them from past user actions, leaving plenty of room for uncertainty. To investigate this hypothesis, in future experiments, we will consider calibrating the concentration parameter $\kappa$ of the von Mises-Fisher distribution to an independent estimate of the uncertainty of the current query embedding. The impact of such an approach on the observed reward should provide valuable insights into the relevance of this interpretation. **On the uniform distribution assumption** This aspect was also noted by Reviewer sVYv, and overall, we agree that the assumption of a uniform distribution over the hypersphere is strong and may not hold in practical settings. However, we emphasize that the vMF-exp method itself remains fully applicable regardless of the distribution of action embeddings. Most of its key properties, such as scalability and unrestricted radius, do not rely on the uniformity assumption. Uniformity is used solely to derive the asymptotic equivalence with B-exp and to obtain the analytical approximations presented in Propositions 4.1 to 4.4. Furthermore, the non-trivial proof detailed from Appendices A to D provides a useful foundation for future attempts at deriving a more general result akin to Propositions 4.1 to 4.4 under relaxed assumptions. Interestingly, our experiments on the publicly available GloVe word embedding dataset (Appendix H) suggest that, although these real-world vectors are clearly not uniformly distributed, the analytical approximations from Propositions 4.1 and 4.4 remain accurate in most cases. Moreover, the A/B testing conducted on the music streaming service (Appendix I) further showcases the practical effectiveness of vMF-exp in an industrial-scale, real-world scenario where, again, the action embeddings are not sampled uniformly. Taken together, these results highlight the robustness and practical relevance of vMF-exp beyond the idealized uniform setting of our theoretical study. **On the case of heavily imbalanced samples** It is true that, under a heavily imbalanced distribution of action embeddings, actions associated with larger Voronoi cells may "attract the nearest neighbor search to them." However, we emphasize that the sampling density of the vMF distribution is still modulated by the inner product similarity between the state and action embeddings. In other words, even if an action has a large Voronoi cell, a very low similarity with the current state will significantly reduce its sampling probability. This mitigates the risk of over-sampling irrelevant actions solely due to geometric imbalance. Regarding how much more frequently such an action might be sampled compared to vanilla Boltzmann exploration, the answer is inherently distribution-dependent. The reviewer’s comment highlights an interesting direction for future theoretical investigation into the different behaviors of B-exp and vMF-exp under heavily imbalanced embedding distributions. We will consider discussing this aspect explicitly in the limitations and future work section of the paper. The outline of this future research agenda should be as follows: - Assess the extent to which real-world distributions exhibit outliers with large Voronoi cells. - Evaluate the impact of over-sampling isolated vectors (and under-sampling potentially redundant actions) on the training of models in a batch-learning setting. - Consider alternative distributions from directional statistics that allow for anisotropic sampling, such as the Fisher-Bingham distribution, to balance the oversampling of isolated points. More details about this research agenda can be found in our response to Reviewer sVYv.
Summary: Summary: The authors propose to improve exploration for very large actions spaces (e.g., millions of samples). This work attempts to overcome the main limitation of Boltzmann exploration for high dimensional actions spaces which requires calculating cosine similarities between the reference sample and all other samples. The proposed exploration method using von Mises-Fisher sampling improves upon this by setting probabilities for exploration depending on whether a candidate sample is near the current sample according to a voronoi cell in a hypersphere. Thus the calculation for the exploration probabilities can be kept constant with regards to the number of actions. ### Update after rebuttal I appreciate the authors responses. I think this is a very well written paper with clear desiderata that are needed for sampling in very large action spaces. Even though the experimental section is limited in its thoroughness, I think the authors rather convincingly show that their method is effective. I will keep my score of accept. Claims And Evidence: The paper is extremely well written. The authors propose a set of conditions that need to be met for efficient exploration in high-dimensional spaces and explain how their method fulfills all these conditions in comparison to other methods. Furthermore, an exhaustive analysis is performed of the exploration behaviour when using von Mises-Fisher sampling. Methods And Evaluation Criteria: The authors clearly identify issues with current Boltzmann exploration, which is its inefficiency when dealing with millions of samples. However, the authors go further by proposing a set of desiderate that an exploration algorithm should have for their intended purpose: Scalability, Unrestricted radius, Order preservation. I believe these criteria are sensible for the problem contstraints. The authors scrutinise random, boltzmann and von mises-fisher exploration under these criteria and show that their proposed method fulfills all of these criteria. Theoretical Claims: I did not check every claim in detail, although most justifications follow logically if one is familiar with the discussed methods. In any case, the authors show in Figure 2, that their proposed method holds the properties that were derived for their proposed method. Experimental Designs Or Analyses: I think the authors do a commendable effort on studying the effectiveness of their method and on whether it improves upon Boltzmann exploration. While concrete benchmarks results are not available, given the theoretical guarantees and the synthetic analysis it is likely that the proposed method will perform better. The authors also provide a meta analysis of a field study performed on a large streaming service. Supplementary Material: I checked parts H and I of the appendix. Relation To Broader Scientific Literature: I think this is an interesting problem to address from a technical perspective. However, the problem is quite constrained to the proposed issue of music recommendations. I wonder how well the von Mises-Fisher sampling would work when embeddings are not as "close" in euclidean space. I would like to see the authors address more limitations regarding the type of embeddings that can be used with their proposed methods. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: - How well would the method work for embeddings composed of discrete features? As I understand it, von Mises-Fisher sampling relies on neighbourhoods between samples to be reasonably close to each other. What happens when features are distant in euclidean space. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **On the possible distribution of embeddings** Regarding your concern, vMF sampling is applicable to a wide range of embedding distributions and is not restricted to the uniform distribution on the sphere, which we assumed in part of the theoretical analysis in Section 4. vMF sampling operates by identifying approximate nearest neighbors based on inner products in a d-dimensional hypersphere. Equivalently, it can be understood as finding nearest neighbors using cosine similarity in a d-dimensional Euclidean space. Unlike Euclidean distance, cosine similarity is inherently bounded between -1 and 1, making our approach robust even when embeddings are far apart in Euclidean terms. To assess the behavior of our method for both low and high inner product similarity values, we provide in Figure 9 (Appendix H) an analysis of vMF sampling on the real-world dataset of GloVe-25 embeddings, considering cases where the action to be sampled has a similarity of -0.9 and +0.9 with the context vector. These plots demonstrate that while vMF exploration behaves differently from Boltzmann exploration, it still satisfies properties P1, P2, and P3 outlined in Section 2, reinforcing its practical applicability. **On the particular case of discrete features** You also raise an interesting question regarding the behavior of vMF sampling when features are discrete. This is particularly relevant in the context of embedding quantization, which reduces memory usage and accelerates similarity computations by decreasing the number of bitwise comparisons. In this scenario, it is again important to highlight that cosine similarities remain bounded between -1 and 1, ensuring that approximate nearest neighbor search remains feasible. A potential challenge in highly quantized embeddings is that the set of possible inner product similarities becomes discrete, potentially leading to ambiguity when multiple points have identical similarities to the context vector. However, this issue arises even in Boltzmann exploration, as it also depends on computing inner product similarities before sampling actions. Moreover, even when embeddings are discrete, sampling from a vMF distribution around a discrete state embedding $V$ produces a new state vector $\tilde{V}$ with continuous features, mitigating the aforementioned issue. **On additional applications beyond music recommendation** It is worth noting that although the submission has recurrently referred to the example of music recommendation, other recommendation scenarios involving very large catalogs can be considered. To illustrate this, we refer to the paper [1] mentioned by Reviewer gBBS. In this paper, which deals with the task of extreme classification, a dataset of Amazon products with 670,000 different labels is considered for evaluation. More generally, extreme classification is an active area of research that involves different sorts of applications, and for which a benchmark can be found at http://manikvarma.org/downloads/XC/XMLRepository.html [1] Lopez et al., Learning from eXtreme Bandit Feedback, 2020
Summary: This paper addresses the challenge of exploration in reinforcement learning (RL) when the action space is extremely large, as in real-world recommendation systems like music streaming platforms. Traditional exploration strategies such as Boltzmann exploration and epsilon-greedy become inefficient or intractable in these settings, especially when millions of actions are involved and many are irrelevant. A common workaround, truncated Boltzmann exploration (TB-exp), limits exploration to a small subset of actions, but this may hinder optimal performance. The authors propose a new method, von Mises-Fisher exploration (vMF-exp), designed for large-scale RL tasks where actions are represented by embedding vectors on a unit hypersphere. vMF-exp samples a direction using a von Mises-Fisher distribution and explores actions in that direction, allowing scalable and informative exploration. The paper provides a theoretical analysis showing that vMF-exp retains key properties of Boltzmann exploration in certain regimes while being computationally efficient. Empirical results, including real-world deployment on a music streaming platform, support the method’s effectiveness. A public Python implementation is also provided. Claims And Evidence: Claims and supported by mathematical statements and experiments. Methods And Evaluation Criteria: They seem to make sense. Theoretical Claims: I did not check the correctness of the proofs. Experimental Designs Or Analyses: I didn't check in detail. Supplementary Material: No. Relation To Broader Scientific Literature: The works builds on the broader research effort of designing scalable exploration strategies in large action spaces. Essential References Not Discussed: NA Other Strengths And Weaknesses: Main weakness: While vMF-exp offers a more scalable alternative to B-exp, its theoretical justification relies on a regime where action embeddings are sampled uniformly at random from the hypersphere—a setting that is unlikely to reflect the structure of real-world datasets. Moreover, vMF-exp tends to assign higher sampling probability to isolated actions (i.e., those with large Voronoi cells), which may not be desirable. For example, in a music recommendation context, this could lead to disproportionately sampling songs from niche genres unrelated to a user’s preferences. This behavior raises concerns about the method’s practical suitability, and as a result, I remain unconvinced that vMF-exp is the right scalable solution for exploration in large action spaces. With that said, experimental results still look promising. Other Comments Or Suggestions: Paper is well written and easy to follow. Questions For Authors: Can you address the weakness that I pointed out. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **On the uniform distribution** The reviewer highlights an important point. We agree that the assumption of a uniform distribution over the hypersphere is strong and may not hold in practical settings. However, we would like to emphasize that the vMF-exp method itself remains fully applicable regardless of the distribution of action embeddings. Most of its key properties, such as scalability and unrestricted radius, do not rely on the uniformity assumption. Uniformity is used solely to derive the asymptotic equivalence with B-exp and to obtain the analytical approximations presented in Propositions 4.1 to 4.4. Interestingly, our experiments on the publicly available GloVe word embedding dataset (Appendix H) suggest that, although these real-world vectors are clearly not uniformly distributed, the analytical approximations from Propositions 4.1 and 4.4 remain accurate in most cases. Moreover, the A/B testing conducted on the music streaming service (Appendix I) further showcases the practical effectiveness of vMF-exp in an industrial-scale, real-world scenario where, again, the action embeddings are not sampled uniformly. Taken together, these results highlight the robustness and practical relevance of vMF-exp beyond the idealized uniform setting of our theoretical study. **On the potential over-sampling of isolated actions with large Voronoi cells** The reviewer raises an interesting question. In practice, a large Voronoi cell simply implies that a given action is more likely to be sampled than an equally undesirable action with a smaller Voronoi cell, but *not necessarily* that it would be sampled disproportionately overall. When Property P3 holds, the sampling probabilities remain aligned with action relevance. That is, undesirable actions, even those with large Voronoi cells, will still receive lower sampling probabilities than more desirable ones. Therefore, the ranking of actions by relevance is preserved, and the method does not favor undesirable actions in an uncontrolled manner. That said, the reviewer's comment opens up interesting avenues for additional research related to exploration in the presence of outliers. We detail below a proposal for a future research agenda tackling the specific problem of outliers management in the context of a the scalable sampling method proposed in this submission: * *Prevalence of isolated points in real-world datasets of embeddings:* the first step would be to determine if real-world embeddings actually exhibit isolated points with larger Voronoi cell overly attracting nearest neighbor search. Theoretical approaches using tools from statistical theory, such as Mardia's multivariate kurtosis, and empirical approaches based on Monte Carlo estimation of the surface of Voronoi cells can both be considered for this task. * *Impact of oversampling isolated actions on model training and reward*: in case that isolated points frequently occur, a follow-up research question would be to determine the impact on observed rewards and model training, which wouldn't necessarily be negative. Indeed, actions embedded close to one another are likely to provide a similar reward, thus carrying redundant information when selected. A method undersampling those actions in favor of isolated ones could thus provide more diverse batches for training, potentially accelerating convergence of bandit and reinforcement learning approaches, all the while introducing diversity in the selected actions which in the case of a recommender system can provide a more interesting experience. * *Correcting for oversampling of isolated points*: while the von Mises-Fisher distribution samples vectors isotropically around its mean direction, other distributions from the directional statistics literature can be considered. One interesting candidate could be the Fisher–Bingham distribution, whose probability density function is given by: $$f(\tilde{V} | {V}, {\Sigma}) = \frac{\exp ( - \frac{1}{2} \tilde{V}^\top {\Sigma} \tilde{V} + \tilde{V}^{\top}\Sigma V ) }{C({\mu}, {\Sigma} )} $$ where $\Sigma$ is a full rank, square concentration matrix, that can be fitted from embeddings in the neighborhood of the context vector $V$ to account for anisotropy, thus favoring denser areas around $V$ containing more embeddings.
Summary: This paper proposes a method called vMF-exp: a method for exploration in tasks with large action spaces. One such task is recommender systems, where there are millions of categories to choose from. The paper discusses 3 important properties in order to have good exploration: 1. scalability to sample actions from a large search space; 2) high radius to have some probability of sampling every action; 3) order preservation where the probability of selection of an action depends on some similarity metric. The paper then discusses the problems with existing methods like $\epsilon$-greedy, Boltzmann exploration, and Truncated Boltzmann exploration~(TB-exp). A method called vMF-Exploration is proposed which satisfies these properties. A/B testing is conducted where the vMF-exp performs similarly to the TB-exp on a music recommendation platform. Claims And Evidence: The paper motivates the importance of efficient exploration for the application of recommender systems. However, the proposed method vMF performs similarly to the known truncated Boltzmann exploration. It is unclear is the proposed method helped with exploration in the action space. It is unclear if the proposed method is helping with exploration in web-scale recommender systems. As discussed in the paper, TB-exp suffers from exploration issues because of the fixed radius while sampling exploratory action (P2), it would be interesting to see what fraction of actions were sampled outside this radius and how often that was a good action. Methods And Evaluation Criteria: The paper does not compare with any open-source datasets used to test the claim over recommender systems. Another concern is that only A/B testing results are reported and there are no analysis on where the gains are coming from. Theoretical Claims: I checked Proposition 4.1-4.3. Experimental Designs Or Analyses: I checked Appendix I where the paper discusses the experiments and results of A/B testing. Supplementary Material: Appendix I and Proof of Proposition 4.1-4.3. Relation To Broader Scientific Literature: Exploration is challenging in web-scale recommendation systems due to large action spaces. Moreover, A/B testing is hard as the random exploratory actions can be irrelevant. Any method that improves this exploration is applicable for retrieval in web-scale recommendation systems. Essential References Not Discussed: NA Other Strengths And Weaknesses: 1. The paper is not well-written. The paper talks about importance of exploration in recommendation systems, but all the results of A/B testing are pushed to the Appendix. The main experiments should be in the main paper. The discussion on properties P1-P3 and the existing methods is verbose and having a simple table will help articulate this better. 2. Another concern is that the A/B testing results are not convincing. The paper would benefit from experiments on a public dataset. 3. How does vMF-exp compare to algorithms for eXtreme Multi-Label Classification (XC) like [1]? [1] Lopez et al., Learning from eXtreme Bandit Feedback, 2020 Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Point 2** We begin with Point 2, which we understand to be the reviewer’s primary concern. This point appears to stem from a misunderstanding, as our paper does, in fact, include extensive experiments on a publicly available real-world dataset. Specifically, Section 5 and Appendix H present an in-depth experimental validation of the scalability and theoretical properties of vMF-exp, using a large-scale dataset of publicly available GloVe word embeddings. These experiments reinforce the conclusions of the paper. In particular, they demonstrate that actions consistently receive a positive sampling probability (Property P2 – unrestricted radius), which, by design, TB-exp fails to achieve. Moreover, the experiments validate the practical relevance of Properties 4.1 to 4.4 derived in our theoretical analysis when applied to real-world data. We have made our source code publicly available to ensure full reproducibility. We hope this clarification will help address the reviewer's reservations regarding the evaluation of the method. *P.S.* We plan to add a figure to the A/B test discussion to clearly illustrate that, in this A/B test setting as well, all actions receive a positive sampling probability under vMF-exp; unlike TB-exp, where probabilities are set to zero beyond the top-500 radius. We hope this additional plot will reinforce the message of the A/B test, which not only demonstrates the practical value of vMF-exp through its successful deployment at scale for music recommendation to millions of users, but also highlights its ability to support broader exploration without compromising performance. **Point 1** While two other reviewers found the paper to be *"extremely well written"* and *"well written and easy to follow,"* we understand the reviewer's comment and would like to clarify the motivation behind the structure and focus of the paper, which may help contextualize our presentation choices. As stated in the paper, the primary objective of this work is to introduce vMF-exp and provide a rigorous mathematical analysis of its exploration behavior over large action sets. In this sense, the paper is positioned primarily as a theoretical contribution, one that could have been presented as a purely theoretical article. The experimental results were intended as a complement to the theoretical findings, offering empirical insights and illustrating the applicability of vMF-exp beyond mathematical results. This motivation guided our decision to place these experiments in the appendix in order to preserve focus in the main text. That said, we appreciate the reviewer's suggestion. We will consider creating a summary table to better articulate the key differences among methods and improve accessibility. Moreover, in a future arXiv version of the article, where space constraints are not an issue, we will consider moving both experiments on GloVe public data and the A/B test into the main body of the paper. **Point 3** The mentioned paper deals with correcting importance sampling estimators of bandit algorithms when the action set is extremely large. To reduce variance, they make the explicit assumption that most actions are irrelevant, assigning them a non-learnable minimal reward and stabilizing the computation of the expected reward for a learnable policy. However, they explicitly state that they do not provide a solution to identify the subset of relevant actions to explore, and instead resort to "a greedy heuristic," namely top-$p$ sampling with $p = 20$, followed by softmax, i.e., TB-exp with $m = 20$. Our exploration method could be combined with their training approach for a potentially stronger algorithm (in our case, the set of relevant actions can vary depending on $x$, which constitutes a meaningful improvement over the fixed-size greedy top-$p$ selection). It is interesting to note that, as a future direction, they mention developing methods to *"further help in incorporating prior knowledge about the reward function"*... which is precisely what vMF-exp sampling is about.
null
null
null
null
null
null
Variance as a Catalyst: Efficient and Transferable Semantic Erasure Adversarial Attack for Customized Diffusion Models
Accept (poster)
Summary: The paper proposes a novel adversarial attack method, leveraging variance manipulation to efficiently and consistently erase identity semantics from images generated by diffusion models, such as Stable Diffusion. The authors introduce two main approaches, Laplace-based and Lagrange Entropy, to address limitations in existing methods like MSE and PID . Their techniques focus on optimizing the variance components of the latent space, enabling more effective and efficient semantic erasure in generated images. Additionally, they demonstrate the transferability of their methods across various diffusion models and against different personalization techniques, showcasing the robustness of their approach. Claims And Evidence: Yes Methods And Evaluation Criteria: The paper proposes two novel loss functions—LA and LE—focused on optimizing the variance in the latent space of diffusion models. The LA loss ensures the gradient is aligned with the variance growth direction, allowing for efficient local optimization. The LE loss integrates entropy and a Lagrange constraint to balance optimization, promoting variance growth in smaller components and preventing slow convergence as variance becomes more uniform. These two losses outperform traditional methods like MSE and PID in both speed and effectiveness. Theoretical Claims: The paper claims that manipulating the variance of the latent codes is the key to erasing identity semantics in generated images. While the authors theoretically justify their approach using gradient flow analysis, the detailed reasons why variance manipulation is superior to other adversarial strategies could be elaborated further. Experimental Designs Or Analyses: The experiments are well-designed, covering a range of datasets and comparing the proposed methods to a variety of baselines. However, the study could benefit from a broader set of experiments that test the methods on more diverse datasets, including non-human subjects or artistic images, to assess the method’s generalizability. Additionally, more discussion is needed on the visual quality of generated images and how the methods balance identity erasure with image naturalness. Supplementary Material: No Relation To Broader Scientific Literature: The paper is well-positioned in the context of adversarial attacks and privacy protection for generative models. Essential References Not Discussed: No Other Strengths And Weaknesses: # Strength - The idea of manipulating variance components in latent space to achieve semantic erasure is an innovative approach. It provides an alternative to traditional adversarial techniques that focus on perturbing pixels or the mean of the latent space. This allows for more targeted attacks on identity information. - The methods proposed (LA and LE) are computationally efficient, using significantly less memory and processing time compared to existing methods like MetaCloak and SimAC. For example, LE requires only 8 seconds and 4.5 GB of GPU memory, which is impressive considering the task at hand. Additionally, the high transferability of the methods across models like SD1.5, SD2.1, SDXL, and others is a valuable contribution to the fieldxperimental Design and Evaluation # Weakness - The mathematical models in Section 3 offer a detailed explanation of the loss functions (LA and LE), but the implications of these formulations on the stability of the optimization process, especially under high-dimensional settings, could be better discussed . While the authors addrestical challenges (e.g., Jacobian computation), more clarity on potential issues like gradient vanishing/explosion in high-dimensional latent spaces could enhance the robustness of their claims . - The authors present a variety of experimental settings, including comparisons with multiple state-of-the-art methods. However, the experiments could benefit from a deeper analysis of the real-world implications of their method. For example, further discussion is needed on the trade-offs between the extent of identity erasure and the visual quality of generated images. Although the images achieve high semantic erasure, they may not always meet the quality standards expected in all applications. - The experimental evaluation also primarily focuses on face images (CelebA-HQ, VGGFace2). It would be beneficial to extend these tests to a broader range of image types and tasks (e.g., artistic generation or non-human subjects) to test the generalizability of the proposed method. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **Q1: Gradient Explosion and Unstable Oscillations** Thank you for raising this important point. We acknowledge that rapidly increasing latent variance can lead to gradient explosion and numerical instability. As discussed in **Appendix D.1 and Fig. 4**, the gradient of the LE loss, $\tfrac{\partial \mathcal{L}_{LE}}{\partial \delta}$, explodes around 30 optimization steps, while the LA gradient begins oscillating around step 50. These issues are caused by the nonlinearity of the model and the Jacobian terms, which become unstable as the latent distribution expands and certain activations saturate in high-dimensional space. Interestingly, **this instability is not entirely detrimental**. In fact, it reveals a clear and interpretable progression in the generated images over the course of optimization: - **At step 20**, the variance increases slightly. Most latent dimensions remain compact, and only a few begin to expand. The generated image still retains a recognizable facial structure with mild noise and texture distortions. - **Around step 25**, variance expansion becomes more pronounced. The image quality starts to degrade with strong noise artifacts appearing across the face. However, some identity-related features are still visible. At this point, the visual result looks similar to outputs from methods like SimAC and PID—unnatural and distorted, but not fully erased. - **Between steps 30 and 50**, the variance increases rapidly across many dimensions, causing the latent distribution to flatten and spread widely. This pushes the sampled latent codes far from the dense semantic region covered by the VAE’s training distribution. Since these codes fall outside the decoder’s learned prior, they are interpreted as random noise, leading to outputs with no recognizable facial structure or semantic content. - **At step 75**, variance grows at different rates across dimensions. Some dimensions saturate while others continue expanding, leading to imbalanced latent structures. The outputs exhibit chaotic combinations of textures and colors, forming a new kind of “random state” in appearance. - **Beyond step 100**, further gradient explosion or sign flipping may occur. In some cases, the decoder maps these extreme latent codes to scene-like artifacts (e.g., indoor layouts). These are not restored content but hallucinated patterns caused by nonlinear decoding from noise. This behavior highlights a key advantage of our method: **Controllability**. Users can stop the optimization early (e.g., at step 25–30) to avoid instability while still achieving strong identity removal with reasonable image quality. Alternatively, further optimization gradually strengthens the erasure effect, progressing from facial distortion to complete identity removal and, eventually, to synthetic scene-like hallucinations. This allows users to tailor the erasure strength to their specific needs, offering a practical balance between robustness and flexibility. ### **Q2. Trade-off Between Identity Erasure and Visual Quality** Thank you for the thoughtful suggestion. We address the trade-off between identity erasure and visual quality in **Table 7 of the main paper**, where we compare our method under different perturbation budgets. Notably, our approach achieves complete identity removal (ISM = 0) even with a small perturbation of 8/255, while baseline methods require significantly larger perturbations (e.g., 0.05) to achieve similar or weaker results. This demonstrates that our method is not only effective but also visually practical. In addition, a key strength of our approach is its controllability. Users can flexibly adjust the perturbation size and the number of optimization steps to balance protection strength and image quality. For instance, stopping at around 30 steps with 8/255 perturbation yields strong identity erasure while maintaining acceptable visual quality, which we believe makes the method more adaptable for real-world applications. ### **Q3: More Experiments on other datasets** Thank you for this valuable suggestion. **As shown in Fig. 13 at the end of the appendix**, we have extended our evaluation to artistic-style images. Our method remains highly effective in this setting, producing pure noise outputs and demonstrating strong semantic removal. These results indicate that our approach is not limited to facial datasets and can generalize well to a broader range of image types. We sincerely appreciate your encouragement to further validate the robustness of our method.
Summary: This paper protects images from malicious editing by attacking diffusion models. The authors design two loss functions, LA and LE, to attack the image variance after VAE encoding, demonstrating stronger attack effectiveness compared to other methods. ## update after rebuttal Authors' rebuttal have solved my concerns, and I will adjust my rating based on other reviewers' scores. Claims And Evidence: Although the authors emphasize the importance of variance, there is no experiment demonstrating whether attacking the mean also has the same attack effect. Methods And Evaluation Criteria: What does “clean” in Table 8 mean? Does it refer to the performance without any defense method applied, or the performance without any attack? Both should be included in the table to better illustrate the impact of the defense methods on the attacks. Theoretical Claims: In my opinion, there are no major issues with the theoretical analysis, but I am not an expert in this area, so I suggest considering the opinions of other reviewers as well. Experimental Designs Or Analyses: The relationship between the two proposed losses is not clearly illustrated, and the experiments do not explicitly clarify which loss is more advantageous for specific scenarios. Supplementary Material: Yes, I have reviewed all. Relation To Broader Scientific Literature: The proposed method is an improvement based on previous VAE methods, such as PID. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths : 1. The proposed methods show better attack performance compared to other approaches. 2. The design of the loss functions has a certain level of interpretability. Weakness see the above Other Comments Or Suggestions: No additional comments Questions For Authors: No additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Experiments of Mean Attack** **Table 1: Effectiveness of Attacking Mean vs. Variance** | Method| ISM ↓ | FDFR ↑ | Brisque ↑| LPIPS ↑| | -------------------- | ----- | ------ | ----------- | --------- | | LA_Mean_30step| 0.276 | 0.598 | 29.801| 0.855| | LA_Mean_50step| 0.234 | 0.703 | 31.714| 0.861| | LA_Mean_100step| 0.204 | 0.781 | 33.219| 0.871| | LA_Mix_100step| 0.231 | 0.772| 32.594| 0.872| | LA_Var_30step (ours) | **0** | **1** | **155.845** | **0.959** | Thank you for your thoughtful comment. We address this concern both theoretically (see our response to Reviewer WD7W, Q1) and empirically in **Table 1**. The results clearly show that attacking the mean alone yields significantly weaker erasure performance, even with more optimization steps. For example, **LA_Mean_100step** achieves ISM = 0.204, whereas our **LA_Var_30step** achieves **ISM = 0** and **FDFR = 1** with much fewer steps. Furthermore, combining mean and variance attacks (**LA_Mix_100step**) limits the growth of variance and results in less effective erasure than attacking variance alone. These consistent trends demonstrate that variance-based attacks are more effective and efficient, both in theory and in practice. **Q2: Ambiguity about Table 8** **Table 2: Robustness of different methods against JPEG Compression and GrIDPure** | Attack Method | No Image Preprocessing | No Image Preprocessing | No Image Preprocessing | No Image Preprocessing | JPEG Compression | JPEG Compression | JPEG Compression | JPEG Compression | GrIDPure| GrIDPure| GrIDPure | GrIDPure | | ------------- | ---------------------- | ---------------------- | :--------------------: | ---------------------- | ---------------- | ---------------- | ---------------- | ---------------- | --------- | --------- | ---------- | --------- | || ISM ↓| FDFR ↑|Brisque ↑| LPIPS ↑| ISM ↓| FDFR ↑| Brisque ↑| LPIPS ↑| ISM ↓| FDFR ↑| Brisque ↑ | LPIPS ↑| | Clean Image | 0.608| 0.041|17.896| 0.662| -| -| -| -| -| -| -| -| | AdvDM| 0.424| 0.307|24.215| 0.798| 0.659| 0.031| 30.269| 0.771| 0.601| 0.031| 20.609| 0.705| | ASPL| 0.406| 0.287|24.419| 0.805| 0.668| 0.042| 31.592| 0.769| 0.574| 0.022| 20.061| 0.707| | Mist| 0.249| 0.169|13.981| 0.707| 0.629| 0.014| 33.686| 0.807| 0.541| 0.057| 20.149| 0.773| | MetaCloak| 0.593| 0.051|36.325| 0.712| 0.592| 0.075| 53.622| 0.813| 0.593| 0.047| 27.871| 0.745| | SimAC| 0.253| 0.865|51.059| 0.823| 0.632| 0.032| 32.955| 0.759| 0.548| 0.091| 24.967| 0.699| | DisDiff| 0.605| 0.116|29.361| 0.695| 0.672| 0.035| 29.217| 0.753| 0.618| 0.032| 19.651| 0.693| | SDS-| 0.655| 0.005|38.519| 0.743| 0.684| 0.022| 33.053| 0.818| 0.624| 0.047| 23.174| 0.729| | PID| 0.069| 0.938|85.533| 0.899| 0.473| 0.255| 40.435| 0.804| 0.473| 0.189| 43.878| **0.773** | | **LE (ours)** | **0**| **1**|**155.804**| **0.947**| **0.451**| **0.358**| **62.541**| **0.821**| **0.456** | **0.231** | **49.071** | 0.769 | | **LA (ours)** | **0**| **1**|**155.845**| **0.959**| **0.447**| **0.342**| **63.572**| 0.791| **0.449** | **0.242** | **49.729** | 0.761 | Thank you for the question, and we apologize for the confusion. In Table 8, “Clean” refers to the original image without adversarial perturbation. To better illustrate the robustness of our method, we provide **Table 2**, which evaluates robustness performance under both **traditional image preprocessing (JPEG Compression)** and **diffusion-based purification (GrIDPure)**. In this table: - “Clean Image” denotes the unperturbed original image - “No Image Preprocessing” indicates adversarially protected images before any additional post-processing. As shown, our methods consistently achieve stronger robustness than most baselines and deliver performance comparable to or better than PID under both JPEG compression and GrIDPure purification. This demonstrates the effectiveness and reliability of our approaches under common real-world applications. **Q3:LA vs LE, which is better?** Thank you for the feedback. Both LA and LE losses are designed to align perturbations with the direction of latent variance growth, and they share the same core objective. While their formulations differ slightly, they are conceptually equivalent in guiding effective attacks. As shown in **Tables 1, 2, and 3 of the main paper**, both losses consistently outperform existing baselines across different evaluation settings. Table 4 further demonstrates that our method improves attack efficiency by nearly 30× compared to the previous SOTA method PID. In addition, Table 7 shows that our loss functions achieve complete identity erasure (e.g., ISM = 0, Brisque = 122.266) even with a small perturbation budget of 8/255, while baselines require much larger perturbations (e.g., 0.05) to achieve inferior results. These results suggest that both losses are effective and efficient, and users can choose either based on implementation preference or training stability.
Summary: This paper proposes two novel loss functions, i.e., Laplace Loss (LA) and Lagrange Entropy Loss (LE), which used for adversarial attacks aimed at disrupting Latent Diffusion Models (LDMs). The key insight is identifying the variance of the VAE latent code as critical for effectively erasing identity semantics in generated images. Experimental results show that the proposed methods achieve good performance compared to existing techniques, effectively producing pure-noise images with completely erased identity semantics. Additionally, these methods demonstrate better transferability and reduced computational requirements. Claims And Evidence: The claims made regarding the efficiency and efficacy of the proposed loss functions (LA and LE) are supported by experimental evidence. State-of-the-art performance metrics (ISM, FDFR, Brisque, LPIPS) demonstrate strong identity erasure capabilities compared to baseline solutions. However, some minor issues remain: [Issue 1] Clarification of the attack scenario is required. It is suggested to clearly state the training and testing process since not all the readers understand the process. For example, the customization of DM usually require the binding of a specific token to the concept. [Issue 2] Evaluation of the protected image quality beyond distortion metrics could strengthen evidence supporting the practical usability of the proposed methods. Methods And Evaluation Criteria: The methods proposed are appropriate for the stated goal of semantic erasure in generative diffusion models. However, the choice of metrics differs from related works without explicit justification. Theoretical Claims: The paper provides thorough theoretical insights into the proposed loss functions, and the derivations appear sound upon careful inspection. Experimental Designs Or Analyses: The experimental designs are sound, clearly comparing proposed methods to multiple baselines. However, the design is limited by testing only one type of prompt. It is suggested to test different kinds of prompts, as well as the prompts that will be used in practical scenarios. Supplementary Material: I have reviewed the Appendices A, B, and C. Relation To Broader Scientific Literature: This paper posits its contributions within the context of existing literature, highlighting improvements over current methods in adversarial attacks for LDMs. It effectively identifies gaps in previous approaches (inefficient optimization, misalignment of gradient signs) and clearly differentiates its contributions. Essential References Not Discussed: The related works section appears comprehensive. Other Strengths And Weaknesses: **Strengths**: S1. Clear Theoretical Justification: The proposed novel loss functions (Laplace-based Loss and Lagrange Entropy-based Loss) are deeply grounded in theoretical analysis, providing a robust rationale for variance manipulation as a mechanism for semantic erasure. S2. Computational Efficiency: The methods achieve substantial speed improvements and require significantly fewer computational resources, making them practical even in resource-constrained settings. S3. Empirical Performance: The experiments demonstrate state-of-the-art performance across several datasets and diffusion model architectures, including advanced versions such as SD3.5 and FLUX.1-dev, validating the effectiveness and generalizability of the approach. (4) High Transferability: Unlike prior methods, the proposed techniques demonstrate consistently high transferability across diverse diffusion model architectures, increasing their utility in various real-world scenarios. Weaknesses: W1. Limited Variety in Prompt Selection: The experimental evaluations rely heavily on a single prompt structure (e.g., "a photo of a sks person"), restricting the assessment of the generalizability of the attack. Expanding tests to diverse prompt types would better demonstrate the broader applicability of the proposed approach, for example, a photo of a sks person dancing/eating on the train. W2. Introduction of New Metrics Without Comprehensive Justification: The authors introduce new evaluation metrics, such as Identity Score Matching (ISM) and Face Detection Failure Rate (FDFR), without thoroughly justifying why established metrics (e.g., standard face recognition scores or perceptual quality measures) were insufficient. This lack of detailed explanation may limit the perceived validity and broader acceptance of the evaluation criteria. W3. The results of adversarial training are encouraged. Other Comments Or Suggestions: LINE 196: Replace "by another neural network" with "by another neural network $g$". Table 6: Alignment issues with check and cross symbols should be addressed for clarity. Questions For Authors: Q1. Can you provide a more explicit clarification regarding why only a single type of prompt was used, and discuss potential impacts on the generalization and robustness of your approach? Q2. Could you elaborate on the rationale for adopting ISM, FDFR, and LPIPS as evaluation metrics over previously established metrics like FDS, FID, and IQS? Q3. What if we do the adversarial training on VAE? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Q1: Attack and Defense Scenario** Thank you for raising this point. Our method is designed for a practical adversarial setting involving a victim (User A) and an attacker (User B): - **Defense Phase**: User A wishes to share photos online but wants to prevent misuse by personalization techniques like DreamBooth or LoRA. Before uploading, User A applies our protection method to the images, which introduces subtle perturbations on photos. - **Attack Phase**: User B collects these protected images and tries to train a personalized diffusion model to generate fake content involving User A. However, since the images have been preemptively protected, the model fails to capture meaningful identity information and generates pure noise output. ### **Q2: Trade-off Between Identity Erasure and Visual Quality** Thank you for the thoughtful comment. We address this trade-off in Table 7 of the main paper, where we compare performance under different perturbation budgets. Our method achieves complete identity erasure (e.g., ISM = 0) even with a small perturbation of 8/255, outperforming baselines that require much larger distortions (e.g., 0.05) to reach comparable effectiveness. This indicates that our method is both effective and visually practical. In real-world deployment, users can easily balance protection strength and visual quality by adjusting the perturbation budget and number of optimization steps. For instance, limiting the perturbation to 8/255 and stopping at 30 steps provides strong identity removal while preserving reasonable image quality. ### **Q3: Different Prompt** Thank you for raising this important concern. As shown in **Appendix C.1 (Table 9)** and **Appendix E (Figure 6)**, we evaluate our method using an alternative prompt (“**a dslr portrait of sks person**”) in a DreamBooth personalization setting. The results show that our approach remains highly effective under prompt mismatch, consistently producing pure-noise outputs. This confirms that our method is **prompt-agnostic**. Unlike prior works such as Anti-DreamBooth, AdvDM, SimAC, and MetaCloak, which require prompt-specific gradient information during optimization, our method does not bind perturbations to any particular prompt. Instead, it directly targets the VAE encoder, which operates before the prompt is introduced. Since prompts influence only the denoising stage (e.g., U-Net or MM-DiT), they do not affect our gradient path, ensuring independence from specific prompt conditions. Moreover, our method demonstrates strong performance against **LoRA-based personalization (Appendix E, Figure 7)**, where arbitrary prompts can be used during generation, as well as in **ControlNet-based image editing (Appendix E, Figure 28)**, where we apply a new prompt (“**a man**”) to modify the gender of the reference image. These results further validate the prompt-agnostic nature of our approach and highlight its robustness and practicality in diverse real-world scenarios. ### **Q4: Metric** Thank you for the question. Our choice of ISM, FDFR, and BRISQUE, aligns with Anti-DreamBooth, a seminal work in identity erasure. These metrics provide intuitive and reliable evaluations for our task: - FDFR (Face Detection Failure Rate) measures whether a face is undetectable and is mathematically equivalent to $1-FDS$. We report FDFR instead of FDS because it more intuitively reflects successful cases of identity erasure in privacy protection. - ISM evaluates identity similarity between the original and generated images using a face recognition model, directly reflecting whether identity semantics remain. - LPIPS is a perceptual metric to estimate visual degradation. Compared to traditional metrics such as FID or IQS, LPIPS better correlates with human perception, especially in assessing localized texture changes or structural distortions. ### **Q5: Adversarial Training on VAE** Adversarial training on the VAE may theoretically enhance robustness, but it faces two key challenges: **1. High Cost and Training Difficulty** Adversarial training is computationally expensive and time-consuming. This is especially problematic for VAEs, which are harder to train than classifiers due to issues like posterior collapse and poor convergence with limited or adversarial data. For typical users with limited resources, this makes adversarial training inefficient and impractical. **2. Stronger Attacks via Proxy Optimization** As shown in ASPL, alternating optimization using the VAE as a proxy can significantly boost attack strength. This increases the difficulty of robust training and further destabilizes the learning process. These two issues suggest that while adversarial training is a theoretically viable defense, it is both computationally costly and technically nontrivial in practice, especially in low-resource environments.
Summary: The paper introduces LA and LE loss functions to enhance semantic erasure in customized diffusion models, addressing privacy concerns by completely removing identity-related features. It identifies variance in VAE latent codes as key to image distortion and uses optimized variance expansion for effective erasure. Experiments on CelebA-HQ and VGGFace2 show state-of-the-art identity removal, 30x faster optimization, and strong transferability across diffusion models. The method is computationally efficient. Claims And Evidence: Yes Methods And Evaluation Criteria: The proposed LA and LE loss functions effectively address the semantic erasure problem in diffusion models by leveraging variance expansion in VAE latent codes. The methodology is well-grounded in theoretical analysis and overcomes limitations of prior approaches. The evaluation criteria are appropriate, using CelebA-HQ and VGGFace2 datasets, and metrics such as Identity Score Matching (ISM), Face Detection Failure Rate (FDFR), Brisque, and LPIPS, which provide a comprehensive assessment of identity removal effectiveness. Theoretical Claims: Please refer previous section. Experimental Designs Or Analyses: Yes, I have reviewed the soundness and validity of the experimental designs and analyses. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper builds on prior work in adversarial attacks on diffusion models and semantic erasure techniques, particularly targeting identity removal in personalized generative models. Previous methods, such as PID and Mist, attempted to erase identity semantics but were limited by slow convergence, poor transferability, and high computational costs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. Existing methods may not be optimal, but they seem sufficient to make people doubt the authenticity of a generated image. For instance, in Figure 3, results from SimAC and PID are not as effective as those from your proposed method, but they still make the image look unnatural enough for a viewer to recognize it as manipulated. Could you elaborate on your argument regarding why stronger erasure is necessary beyond this level of distortion? 2. Given that your proposed methods rapidly expand variance, is there a potential risk of gradient instability or numerical issues during optimization? 3. Your method demonstrates efficiency in terms of GPU usage and runtime. Could you comment on how this computational efficiency scales to larger images or video data, given the rapidly increasing usage of diffusion models in these domains? 4. While your method shows strong transferability across different model architectures, could you discuss any theoretical insights explaining why variance-based attacks generalize so effectively across diverse diffusion models? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Advantages of Variance-based Attack and Better Transferability** **1.Model Architecture** Earlier diffusion models (e.g., SD1.5, SD2.1) use U-Net backbones, enabling effective attacks based on U-Net gradients or cross-attention. However, newer models like SD3.5 and FLUX.1 adopt Transformer-based backbones such as MM-DiT, which differ greatly in gradient behaviors. As a result, prior gradient-based attacks often fail. Despite this, all Latent Diffusion Models share a common pipeline: a VAE encodes the input image to a latent space, followed by diffusion. Our method perturbs the input image to corrupt the VAE output, an essential bottleneck, ensuring transferability across different architectures. **2.Attack VAE Mean $\mu$** VAE applies the reparameterization trick to make sampling differentiable: $$ z=\mu(x)+\sqrt{\sigma^2(x)}\odot \epsilon,\epsilon \sim \mathcal{N}(0,I). $$ Perturbing the mean yields: $$ z'=\mu(x)+\delta_\mu+\sqrt{\sigma^2(x)}\odot\epsilon, \epsilon \sim \mathcal{N}(0,I). $$ The corresponding probability density function (PDF) becomes: $$ p(z')=\frac{1}{\sqrt{2\pi \sigma^2(x)}}exp(-\frac{(z'-(\mu(x)+\delta_\mu))^2}{2\sigma^2(x)}). $$ Attacking the mean shifts the center of the latent distribution without altering the shape of its PDF. Since the variance remains unchanged, the distribution stays compact. Due to the translation-invariance of convolutional backbones, the semantic structure is partly preserved. Visually, this effect is mostly limited to local texture distortion, appearing as noise artifacts rather than semantic erasure. **3.Attack VAE Variance $\sigma^2$:** Perturbing the variance gives: $$ z=\mu(x)+\sqrt{\sigma^2(x)+\delta_{\sigma^2}}\odot \epsilon,\epsilon \sim \mathcal{N}(0,I) $$ The PDF becomes: $$ p(z')=\frac{1}{\sqrt{2\pi(\sigma^2(x)+\delta_{\sigma^2}})}exp(-\frac{(z'-\mu(x)^2)}{2(\sigma^2(x)+\delta_\sigma^2)}). $$ Here, **$\mu(x)$ represents the mean of the original latent distribution**, which corresponds to the dense, semantically meaningful region learned during training. As variance grows large, the 1D Gaussian distribution of each latent dimension flattens into a wide and low curve that is nearly parallel to the x-axis. This flattening means that the probability density near the original mean $\mu (x)$ drops quickly, while the overall sampling interval expands considerably. As a result, the latent samples are no longer concentrated near the dense semantic center of the distribution but become spread out over a vast latent space. From a high-dimensional perspective, this effect is amplified. Given a $d$-dimensional latent space, the expected squared distance between the latent samples $z'$ and mean $\mu (x)$ is: $$ \mathbb{E}[\|| z' - \mu(x) \||^2] = \sum_{i=1}^d \left( \sigma_i^2(x) + \delta_{\sigma^2,i} \right) $$ As variance grows, latent samples $z'$ are pushed away from the high-density region centered around $\mu(x)$. These samples fall outside the data manifold and become unrecognizable to the decoder's prior knowledge. Consequently, the decoder fails to map them to any meaningful structure and instead produces pure noise images. **Q2: Why Stronger Erasure is Necessary** Existing methods add noise artifacts that make the image appear unnatural, but still preserve recognizable facial structure. Such noise artifacts added on preserved facial structures often induce a sense of visual eeriness and discomfort. Moreover, this kind of incomplete protection can also be seen as a form of visual uglification of the victim’s appearance, which could potentially be exploited by malicious users. In contrast, our method aims for complete semantic erasure, providing stronger privacy protection. **Q3: Higher Resolution and Video tasks** Thank you for the insightful question. We conducted tests with higher-resolution inputs (e.g., 1024×1024). When preserving the adversarial example at full 1024×1024 resolution, the runtime increases from approximately 8 seconds to 19 seconds. This is primarily due to the increased computational cost from larger spatial dimensions in both the VAE and the diffusion backbone, which affects all latent-space or diffusion-based attacks. Nevertheless, our method remains highly efficient and consistently outperforms existing approaches in both runtime and GPU usage, even at higher resolutions. For video data, our approach can be extended to video generation models using 3D-VAEs or spatiotemporal latent spaces. In this context, variance-based perturbations may disrupt temporal consistency or causal dependencies across frames, offering a promising direction for future adversarial research in video diffusion models. **Q4: Gradient Explosion and Unstable Oscillations** We appreciate your attention to this important point. Due to character limit, we provide a detailed response to a similar question raised by **Reviewer vD1g Q1**. We kindly invite you to refer to that reply for further clarification.
null
null
null
null
null
null
EKM: An Exact, Polynomial-Time Divide-and-Conquer Algorithm for the K-Medoids Problem
Reject
Summary: The submission provides a novel algorithm for the K-medoids problem. Claims And Evidence: The technical claims are supported by evidence, with the exception of the fact that the problem definition is not presented clearly. Methods And Evaluation Criteria: The methods and evaluation criteria are correct, but very basic. Theoretical Claims: See strengths and weaknesses. Experimental Designs Or Analyses: See strengths and weaknesses. Supplementary Material: The supplementary material provides details for the code; the correctness of the code is however not a determining factor given the other issues with the submission. Relation To Broader Scientific Literature: I did not identify any problems in this aspect. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: While the submission describes an exact algorithm for the K-medoids problem, the problem in question is not formally defined. The closest thing to a definition can be found in Section 2.1, but even that is far from a sufficient formalization of the studied problem: among others, it contains ambiguities concerning the objective function. At the bottom of page 2, the article claims that they make no assumptions about the distance function, but this certainly cannot be correct (for instance, if the distance function is not efficiently computable then the algorithm cannot work). My biggest concern is the actual contribution of the paper. The main contribution is an O(N^k) algorithm for the problem which can be obtained by brute-force enumeration of all choices of subsets (of size at most k) of the N input data points. Suffice to say, this is entirely trivial; naturally one can use non-trivial implementation tricks to generate these subsets in a more efficient way, but the overview of the contributions does not sufficiently discuss such improvements. The fact that the proposed implementation is more efficient than a naive brute-force approach is demonstrated by some basic experiments, but that on its own is not surprising. Overall, I am afraid that none of the contributions seem substantial enough to warrant publication in the proceedings of ICML. ## Update after rebuttal I maintain my assessment and score. Other Comments Or Suggestions: N/A. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: > 1. Strengths And Weaknesses, "The closest thing to a ... then the algorithm cannot work" This is a very insightful suggestion, our algorithm will be true for any non-negative objective function that can be calculated in the form of the definition after Equation (1), line 103, this ensures the fusion condition in Equation (8) holds。 > 1. Strengths And Weaknesses, "The main contribution is an O(N^k) algorit...but that on its own is not surprising."***" We disagree that one can use "nontrivial implementation" tricks to generate subsets in a more efficient way. To the best of our knowledge, there exists no array-based "non-trivial" generator for subsets generation. Moreover, our generator can process data in a divide-and-conquer way. The most common approaches for subsets generation used in CS communities, as reported in Knuth's The Art of Computer Programming which has now been cited more than 17,900 times, are all list-based methods or one-by-one enumeration. As for why our generator differs from these, see response to Reviewer gKhc, point 1. Our claim is highlighted by the empirical experiments we provide; we compare the classical subsets generator in Python that is used most commonly in the ML community---the *itertools* library and show a superior performance. Regarding the reviewer's concerns on O(N^K) complexity, we explicitly discuss this issue at the beginning of Section 4, from lines 355-374. To clarify, we agree that our algorithm shares the same complexity with the exhaustive search strategy, but the point is that, under the same condition, the state-of-the-art BnB algorithm exhibits exponential complexity, and our algorithm is more efficient than their algorithm in almost all datasets we've tested and produces an exact solution at the same time. Moreover, K in the exponent is unlikely to be eliminated unless P=NP and the complexity of the problem increases as K increases, the study of BnB algorithms rarely reported this limitation, and indeed, their algorithm will have an increased runtime if K increases, as I showed in Point 3 to Reviewer pwNf.
Summary: The k-medoid problem is to find a subset C of k points from a set X of n points in R^d such that the sum of distances of every point in X to its closest point in C is minimised. The exhaustive search algorithm for this problem goes over all possible size-k subsets of X. This has a running time of $O(|X|^{k+1})$. This paper suggests a recursive way to generate all possible subsets of X, and hence, the algorithm's running time matches that of the exhaustive search algorithm. Experiments are conducted on real datasets, and results are compared with the previous Branch-and-bound methods that give approximate solutions. Claims And Evidence: Yes, the claims are supported by evidence. Methods And Evaluation Criteria: The evaluation criteria for experiments can be improved. Please see the comments in the "strength and weakness" section. Theoretical Claims: I checked the algorithm's high-level idea. There are no specific theorems given in the paper. Experimental Designs Or Analyses: I did not find any serious issues. Supplementary Material: No, I did not review the supplementary material. Relation To Broader Scientific Literature: The paper suggests an exhaustive search algorithm for a clustering problem. The broader impact is minimal. Essential References Not Discussed: No. The paper does not crucially use techniques from previous works. Other Strengths And Weaknesses: Strengths: - k-medoid is an important problem in unsupervised learning. Weaknesses: - An exhaustive search algorithm for the problem is of limited interest. - There is literature on solving the problem approximately. Giving motivation (e.g., practical scenarios) for why solving approximately does not suffice and an exact solution is required will help appreciate the paper better. - If there is some advantage of generating all possible subsets in the recursive manner given in the paper over other methods, then this should be shown clearly, giving the list of other methods used and discussing their advantages/disadvantages. This may perhaps improve the exposition. Even the experimental comparisons are with approximate methods and not other exact methods. Other Comments Or Suggestions: ## update after rebuttal: The weaknesses still outweigh the strengths after the rebuttal. I will keep my original score. Questions For Authors: - Some important questions have been mentioned in the strengths and weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: > 1. Theoretical Claims, "I checked the algorithm's high-level idea. There are no specific theorems given in the paper." We do not think the lack of existence of theorems implies that our theoretical claim is invalid. We did this simply for the reason of readability. The correctness of our recursion is justified by the fusion condition in Equation (8), which is equivalent to an inductive proof, but in a more elegant manner. This superior elegance of the proof should be considered as an advantage not a reason to be rejected. > 2. Weaknesses 1, "An exhaustive search algorithm for the problem is of limited interest." > 3. Relation To Broader Scientific Literature, "The paper suggests an exhaustive search algorithm for a clustering problem. The broader impact is minimal." We disagree the broader impact is minimal. There are two main contributions in this paper. 1. We show that the state-of-the-art BnB algorithm has exponential complexity in the worst case even for fixed K. If a simple, naive exhaustive algorithm is better than the existing approach, it is even more crucial to publish these findings. At the very least, it is a critical benchmark for future research on this important topic. 2. Our discussion of an array-based K-combination generator is novel, whereas previous generators are either based on list or one-by-one enumeration, which is far less efficient compared with the one we propose here. This generic generator can be used in the study of many other problems, such as the frequent item set mining problem in data mining, cell enumeration problem in combinatorial geometry. > 4. Weaknesses 2, "There is literature .." As we noted on pages 17-21, the right panel. In high-stake applications, the approximate solution will be unacceptable or carry significant costs. For instance, if we want to build several power stations such that the sum-of-distance to each house is minimal, then little error will be unacceptable because a power station cannot be constructed and deconstructed easily, and small error will cause significant costs. > 5. Weaknesses 3, "If there is some advantage.." This is an insightful suggestion; we omitted the discussion simply because of the space limit, and we needed to choose the most obvious approach to illustrating our claim. The advantages of our generator and others are discussed in Point 1 in the response to Reviewer gKhc. We will clarify it when we have more space. Here is a table of comparison | Combination generators | Efficiency | Parallelizability | Recursive | |----------------------------------- |------------ |---------------- |-----------| | Our generator | High | High | Yes | | Lexicographical generator | Low | Hight | No | | Combinatorial Gray code generators | Medium | Medium | Yes | The recursive property is very important for future speed-up. So lexicographical generator leaves no room for future speed-up.
Summary: The paper presents EKM, a divide-and-conquer algorithm designed to solve the k-medoids problem exactly. Their algorithm guarantees globally optimal solutions in the worst-case $O(N^{k+1})$ time complexity. They compare EKM against approximate algorithms and a state-of-the-art branch-and-bound (BnB) algorithm, showing that their method is faster while also guaranteeing correctness. Claims And Evidence: No. The EKM algorithm proposed in the paper contains two parts: constructing an efficient configuration generator in a divide and conquer manner and a shortcut fusion acceleration. They claim that their method is faster than the enumeration in practice. However, it is not clear to me what the main difference is between EKM and the enumeration and why their algorithm is more efficient. From the description in Sections 2.3 and 2.4, it seems the configuration in EKM is a specific way to enumerate solutions. And this also appears on the running time analysis. The worst case running time is $O(N^{K+1})$, which is the same as enumeration. I hope the authors can explain more where and why their algorithm is more efficient than enumeration in practice. It is also possible that I missed something. Methods And Evaluation Criteria: No, I did not fully understand the advantage of their algorithm compared to the enumeration. The evaluation with the state-of-the-art BnB algorithm is confusing. I don't understand the claim that the BnB algorithm provides wrong solutions. Is it due to the early stop of the BnB algorithm? Theoretical Claims: Yes, I checked the paper carefully, but I am quite confused about the algorithm. See the questions above. Experimental Designs Or Analyses: Yes, but the significant details about the specific implementation of their EKM algorithm are not provided or not clear. Supplementary Material: I tried to check the supplementary material, but they only included the codes and no clear appendix guide on it. It is very hard to go through all codes. Relation To Broader Scientific Literature: Due to the confusion which I mentioned above, it is not clear to me what the key contribution of the paper is. While it may potentially have an impact on more efficient exact solution in a beyond worst case manner, if the authors could answer the above questions. Essential References Not Discussed: The related works about the k-medoids problem are discussed adequately in the paper. But, although it mentioned some related works on shortcut fusion, it didn't clearly explain the main idea in a self-contained way. Other Strengths And Weaknesses: Strengths: 1. The algorithm is derived through a formal, structured approach, ensuring correctness. 2. The paper provides empirical results comparing EKM with BnB algorithm and approximate algorithms like PAM. Weaknesses: 1. The worst-case complexity is identical to brute-force enumeration, undermining the claimed advantage of EKM. Although the author mentioned that for practical instances their algorithm is more efficient, there is no detailed analysis for this claim. 2. The explanation for why BnB methods produce incorrect solutions is unclear. The scalability of EKM is not demonstrated beyond small datasets. Other Comments Or Suggestions: Potential things to strengthen the paper: 1. Provide a clearer distinction between EKM and brute-force enumeration. 2. Include a discussion on how EKM could be parallelized for practical scalability. 3. Illustrate why BnB methods sometimes produce incorrect solutions. Questions For Authors: I hope the authors could answer questions above about the claim clarity and the evaluation of the BnB algorithm. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: > 1. Claims And Evidence, "However, it is not clear .." Thank you for raising this question. Our generator’s efficiency stems from two key advantages: First, as shown in Section 2.3 (lines 169–185, right panel), we organize combinations of the same size into a single list, stored in contiguous memory (e.g., an array). Operations on these contiguous memories will decrease the possibility of cache misses, which makes our program much more efficient compared with the list-based generation methods. The generator based on a list is inefficient because of the way values in a list (such as a single-linked list) are stored somewhere in the memory and it holds a pointer telling you where to find the next one. Modern processors don’t fetch memory one byte at a time; they grab chunks called cache lines—typically 64 bytes or so—because memory access is slow, and fetching a bunch at once hides that latency. The lexicographical generator suffers from the same problem, and it is even less efficient because it does not generate configurations from subconfigurations (similar to the principle of optimality). A comparison between different generators was listed in response 5 to reviewer 5Npd To our knowledge, this array-based sublist generator is novel—unlike common list-based approaches or one-by-one lexicographical enumeration strategies (e.g., Kreher & Stinson, 1999; Knuth, TAOCP, Vol. 4, Sect. 7.2.1.3). Second, our generator uses a divide-and-conquer recursion, unlike the sequential recursion of classical generators, making it more amenable to parallelization. > 2. Methods And Evaluation Criteria, "The evaluation ...Is it due to the early stop of the BnB algorithm?" No. Let’s clarify what we mean by "wrong" and "approximate" solutions. Assume lower is better. A wrong solution refers to one that is lower than the global optimal solution because, by definition, no algorithm can produce a solution better than the optimal one. An approximate solution can be exact but will probably be greater than or equal to the exact solution. Thus, an early stop of any correct optimal/exact algorithm will only result in an approximate solution, not a wrong one. However, Ren’s algorithms frequently produce wrong solutions (datasets LM, UKM, LD, VC, Wine, Yeast, IC, WDG) that are lower than our global optimal solution. > 3. Relation To Broader Scientific Literature, "Due to the confusion .." The combination generator we designed will have many applications beyond the K-medoids problem. See responses 2 and 3 to reviewer 5Npd for the relation to broader scientific literature. > 4. Weaknesses 1, "The worst-case complexity is.." This question was answered in the Discussion section, lines 355–374, and in the same point addressed in responses 3 and 6 to Reviewer pwNf. In short, the state-of-the-art BnB algorithm shows exponential complexity in this setting, making it even less efficient than the simple exhaustive algorithm in terms of worst-case complexity. The difference between our generator and others was explained above. We're unsure what the reviewer means by "no detailed analysis" for our efficiency claim. We discuss the superiority of our generator as an array-based method and inherently embarrassingly parallelizable in lines 169-185, and provide empirical analysis to support this (Fig.3). > 5. Weaknesses 2, "The explanation for why BnB methods... " See responses 3 and 4 to Reviewer pwNfs. Our algorithm handles the largest datasets reported for finding exact solutions, far surpassing previous literature. > 6. Suggestions 1. Provide a clearer distinction between EKM and brute-force enumeration. Due to space constraints, we chose the most straightforward way to highlight the difference: empirical results. Further explanation is provided in Response 1. > 7. Suggestions 2. Include a discussion on how EKM could be parallelized for practical scalability. Due to space constraints, we omitted a detailed discussion. The parallelizability is evident from the definition of EKM in Eq. (10), where EKM (xs ∪ ys, k) is solved by first addressing two subproblems—EKM (xs, k) and EKM (ys, k)—and then merging solutions using "sel_E,k . conv". The subproblems EKM (xs, k) and EKM (ys, k) can be processed independently on separate processors without inter-process communication, and smaller subproblems can be refined recursively. This makes our algorithm embarrassingly parallelizable by definition, offering a theoretical P-fold speedup with P processors. > 8. Suggestions 3. Illustrate why BnB methods sometimes produce incorrect solutions. See Response 4 to Reviewer pwNf. --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed responses. The two key advantages of the proposed algorithm are (1) more efficient access to the memory in the enumeration; and (2) efficient parallelization due to divide and conquer. These two system-level optimization tricks seem to have already been widely used in practice. After reading the response about the wrong solution given by BnB algorithms, it is still not very convincing to me. The explanations are all empirical, without theoretical insights about this wrong solution. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their detailed feedback. While we acknowledge that system-level optimization techniques—such as D&C method and memory management through alternative algorithmic approaches—are widely utilized across various domains, their specific application in our context merits further consideration. However, to the best of our knowledge, no prior D&C algorithm exists for EKM that guarantees exactness. Designing a D&C algorithm *while ensuring it delivers an exact solution* is a challenging task, requiring significant effort to rigorously prove its correctness. Simply employing D&C recursion is fundamentally different from developing an exact D&C algorithm, as the latter demands a formal proof to establish that the optimal solution is always obtained. This point also relates to the broader question of “how to verify an algorithm’s exactness.” Proving exactness is inherently challenging, as it demands a demonstration that all possible configurations are exhaustively enumerated and that only the optimal solution is selected. This is precisely the guarantee that our EKM algorithm provides. In contrast, disproving an algorithm’s correctness is comparatively straightforward: a single counterexample suffices. This explains our reliance on empirical evidence to demonstrate that Ren's BnB algorithm produces incorrect solutions. While the reviewer finds our empirical explanations unconvincing and seeks theoretical insights into these errors, we argue that investing significant effort to uncover the specific bugs or theoretical flaws in the BnB algorithm would be a digression. Such an endeavor is neither practical nor a valuable contribution to knowledge. The key takeaway is that the BnB algorithm fails to consistently deliver exact solutions, as evidenced by our experiments. Rather than dissecting its shortcomings, we propose that starting with a provably correct algorithm—such as ours—offers a more constructive path forward. We hope this clarifies the novelty of our contribution and the rationale behind our evaluation approach. We welcome further suggestions to strengthen our presentation.
Summary: The paper presents a recursive enumeration algorithm (EKM) for solving the K-medoids problem exactly. The proposed method guarantees global optimality and runs in worst-case O(NK+1) time. Its main contribution is a formal derivation of the algorithm using algebraic programming techniques, including a shortcut fusion method for recursive combination generation. The authors compare the EKM algorithm with both state-of-the-art branch-and-bound (BnB) methods (e.g., Ren et al.) and classical heuristics (such as PAM and CLARANS) over several datasets. Claims And Evidence: See strength and weakness Methods And Evaluation Criteria: See strength and weakness Theoretical Claims: The claim for polynomial complexity is too strong. Indeed, the algorithm will also consider K in the runtime analysis. From the paper Ren et al 2022, they proved that their algorithm only consider cluster center variables for branching which the scalabiliy is unrelative to the number of samples. Their runtime complexity would probably only consider K, D for BnB process and number of subproblems for each BnB node calculation (which is $O(N2^{KD})$). If the author consider K and D are fixed value, then Ren's paper would also be claimed as a polynomial complexity algorithm. Experimental Designs Or Analyses: The experimental has some fatal mistakes when running with current optimal k-medoid clustering SOTA. In Ren et al, 2022, their provided testing datasets have last column with data label, see iris data file in their repo. The conclusion from the experiment maybe because the author didn't feed the correct data format to them. Therefore, the comparison is not trustable. Supplementary Material: Yes, all code and dataset but didn't run it. Relation To Broader Scientific Literature: global optimal k-medoid clustering algorithm Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: The paper uses a rigorous algebraic approach (via the Bird-Meertens formalism) that underpins the correctness of the algorithm. The explanation regarding the fusion of evaluation and recursive generation is clearly presented, offering valuable insight for researchers interested in formal algorithm design. Weaknesses: 1. While the algorithm is described as having polynomial complexity in N when K is fixed, the complexity O(NK+1) is exponential in (N,K). This may lead to misinterpretations, especially since moderate values of K (e.g., 8–10) quickly render the method impractical. 2. The paper reports that the BnB approach (Ren et al.) sometimes produces solutions lower than the exact bound, but does not sufficiently investigate whether this is due to misconfiguration, a re-implementation error, or other factors. 3. The experiments are restricted to datasets with K ≤ 5 and moderate dimensionality. There is little discussion regarding scalability for higher K values or the impact of using different distance metrics. 4. The emphasis on theoretical polynomial time is not adequately balanced with a practical discussion of average-case performance or potential heuristics that could reduce the search space. Other Comments Or Suggestions: The paper acknowledges that the EKM algorithm has exponential time and space complexity in K, which limits its practical applicability for problems requiring a large number of medoids. Moreover, the experimental validation is confined to datasets with relatively small K and moderate dimensions. The comparison with the BnB approach may also be biased if the latter is not properly tuned or implemented. Questions For Authors: 1. Can the authors clarify under which conditions the “polynomial time” claim holds, and provide empirical or theoretical bounds for moderate values of K? 2. What measures were taken to verify the reported failures of the BnB method? For example, was the original implementation used, or was it re-implemented? Were parameters such as pruning criteria and time limits appropriately tuned? 3. It would strengthen the paper to include experiments on datasets with higher dimensions or using alternative distance metrics, to better assess the robustness and scalability of the algorithm. 4. Can the authors discuss potential improvements via advanced bounding techniques or parallelization strategies to mitigate the exponential dependency on K? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: > 1. Theoretical Claims We disagree with the reviewer’s claim that Ren's algorithm is polynomial. Our experiments in Sec 3.2 show that runtime for the UK, BM, and Seeds datasets—despite similar sizes—varies widely. Subsampling the UK dataset further reveals exponential runtime growth. While the complexity of a single B&B node may indeed be polynomial, our experiments provide empirical evidence that, in the worst case, the number of BnB nodes grows exponentially. As Ren noted in their paper, "the number of BnB nodes to converge is hard to predict since the efficiency of bounding methods depends on datasets." > 2. Experimental Designs The reviewer may misunderstand our setup. Clustering is an unsupervised learning task, so labels in datasets like Iris are irrelevant to the algorithm and should not be used. Ren explicitly states in their experiment that the Iris dataset is 4D, excluding the label column. To ensure accuracy, our experiments ran Ren's original code directly without modification and replicated their reported results. > 3. Weaknesses 1 As stated in lines 364–365, the K-medoids is NP-hard, so K in the exponents is unavoidable unless P=NP. This challenge is not unique to our algorithm; it affects all other exact algorithms For instance, here are some results by using different K in Ren's algorithm: Iris: 35.3s(K=2), 503s(K=3), 3h-time limit (K=4); HEMI: 40s (K=2), 586s(K=3), 2.01h(K=4); Glass: 22.8s(K=2), 41.4s(K=3), 652s(K=4); LM: 21.7s (K=2), 31.0s(K=3), 142s (K=4); These are approximate solutions, and the growth doesn’t seem polynomial. Moreover, no prior work on K-medoids has obtained ***exact solutions*** for sufficiently large datasets with K=8-10 medoids. Approximate solutions exist, but they are irrelevant for applications needing exactness—our focus. For exact solutions, our algorithm outperforms prior methods, handling N=5000, while earlier results were limited to N=150. > 4. Weaknesses 2 Thanks for the reviewer's suggestion, but find the issue in Ren’s code is tough due to its poor structure and lack of documentation. For example, in 'bb_functions', extending 'time_lapse' doesn’t work, and setting 'mingap' or 'tol' to zero—meant for exact solutions—sometimes returns similar results or fails to stop, even when the optimum is found. Also, their pseudocode is different from the implementation, making it difficult to do unit tests. Without clearer code, it’s hard to tell if the problem is in theory or just bugs > 5. Weaknesses 3 > 6. Questions 3 The K-medoids problem’s complexity doesn’t scale with dimensions in big-O terms. As noted in lines 250-262, we can precompute pairwise distances into a matrix in O(D·N^2) time, and the total sum of distances is calculated in O(K·N). So testing higher dimensions, while interesting, isn’t the key focus of this paper. Our algorithm supports any distance metric meeting the fusion condition (Eq. 8), including various non-negative objectives (defined in the form below Eq. 1) Question of using K<=5 is answered in points 3 and 6 > 6. Weaknesses 4 Our paper targets exact solutions, our main claim. So comparison should base on exact solutions, not approximate ones. This is critical for applications require only exact solutions (see response 4 to reviewer 5Npd). As noted in point 3, our basic algorithm already tackles the largest datasets in the literature, outperforming state-of-the-art exact methods Heuristics could boost average performance, but they don’t cut worst-case complexity, as no acceleration occurs in the worst case. Our contribution lies in that a plain version of the algorithm still outperforms existing exact methods, though we agree future work could explore these practical speedups > 7. Question 1 The polynomial time holds for the K-medoids problem with a fixed K, as defined in Eqs. (1) and (7). Under this condition, our algorithm achieves polynomial complexity, while state-of-the-art BnB methods exhibit exponential worst-case growth, as shown in our empirical analysis. > 8. Question 2 To ensure a fair comparison, We used Ren's original implementation without modification. For Ren's algorithms, tuning is minimal—only “elapse time” and “min_gap” could be adjusted. However, these parameters are irrelevant for exact solutions: introducing them will yield approximate solutions. Also, as noted earlier, these parameters are either useless or simply not converge, even if the algorithm has found the optimal solution. > 9. Question 4 Yes, as Bird et al have shown, monotonic relations, like upper-bound pruning in (He et al), can yield order-of-magnitude speedups by eliminating non-optimal solutions early. Similarly, a prefix-closed filtering process, for instance, enforcing minimum cluster sizes, could enhance efficiency beyond our current implementation. Also, as a D&C method, our algorithm is trivially parallelizable, so no inter-processor communication is required. This yields a theoretical P-fold speedup with P processors. --- Rebuttal Comment 1.1: Comment: Thank you for the author's response. In Ren's github repo, the dataset of iris has 5 column, and they claim they run the dataset has 4 features. What iris dataset did you run? 4 column or 5 column? Should you ask them to verify or make sure there's no setting issue? --- Reply to Comment 1.1.1: Comment: Thank you for raising this insightful question regarding the Iris dataset used in our experiments, as it helps ensure the validity and reproducibility of our results. The reviewer correctly notes that the Iris dataset in Ren’s GitHub repository contains 5 columns (4 features plus 1 label column), while their work claims to operate on a dataset with 4 features. To address this, we have re-evaluated our experiments using the Iris dataset in two configurations: (1) the unlabeled version (4 feature columns) and (2) the labeled version (5 columns, downloaded directly from Ren’s GitHub repository). Below, we present the updated results for both our EKM algorithm and Ren’s algorithm on these datasets: - EKM Algorithm: - Unlabeled Iris (4 features): 83.9600 - Labeled Iris (5 columns, from Ren’s repository): 92.3600 - Ren’s Algorithm: - Unlabeled Iris (4 features): 73.0400 - Labeled Iris (5 columns): 83.9100 From these results, we observe that the objective value of Ren’s algorithm on the labeled Iris dataset (83.9100) is strikingly close to that of our EKM algorithm on the unlabeled Iris dataset (83.9600). Initially, we hypothesized that this similarity might stem from numerical precision issues, as including labels in a clustering task is unconventional and counterintuitive. However, the results suggest a more nuanced interpretation. Two possibilities emerge: 1. **Ren’s algorithm is exact and internally excludes the label column.** In this case, the slight discrepancy between Ren’s result (83.9100) and EKM’s result (83.9600) on the unlabeled Iris dataset could indeed be attributed to numerical issues or implementation differences. 2. **Ren’s algorithm does not exclude the label column.** If this is true, the result for the labeled Iris (83.9100) reflects the algorithm’s performance on the 5-column dataset, while the result for the unlabeled Iris (73.0400) represents its performance on the 4-feature dataset. Notably, both results (73.0400 for the unlabeled Iris and 83.9100 for the labeled Iris) produced by Ren’s algorithm are lower than the exact solutions provided by EKM (83.9600 and 92.3600, respectively). This suggests that Ren’s algorithm again produces an incorrect solution, as its objective values are consistently below the exact solutions obtained by EKM. Given that clustering inherently operates on feature data and not labels, the second scenario provides further evidence that Ren’s algorithm may fail to consistently deliver exact solutions for the Iris dataset, as it yields an objective value below the known optimum. We believe this analysis strengthens our original claim regarding the superiority of EKM over Ren’s approach, and we welcome further discussion or suggestions to refine our evaluation.
null
null
null
null
null
null
Generative Intervention Models for Causal Perturbation Modeling
Accept (poster)
Summary: This paper presents Generative Intervention Models (GIMs), a causal modeling framework designed to predict the effects of perturbations in complex systems with unknown underlying mechanisms. GIMs establish a mapping between perturbation features and a distribution over atomic interventions within a jointly inferred causal model. This method facilitates the prediction of distribution shifts caused by unseen perturbations. The authors assess GIMs using synthetic datasets and single-cell RNA sequencing (scRNA-seq) drug perturbation data, demonstrating that GIMs outperform traditional causal inference methods in structure and intervention target identification while maintaining predictive accuracy comparable to black-box machine learning models. Claims And Evidence: The effectiveness of the proposed method is empirically well supported. However, it relies on black-box modeling and lacks theoretical guarantees or bounds on the correctness of its predictions. This limitation may restrict its applicability in real-world settings. Methods And Evaluation Criteria: The authors conducted evaluations across various settings, including synthetic and scRNA-seq datasets. The evaluation criteria are well-chosen and provide detailed insights into the method's effectiveness. One experimental limitation is the lack of analysis on larger-scale settings with hundreds of thousands of nodes. (Authors subsample 50 genes for scRNA-seq experiments.) It raises the question of whether the method faces computational challenges when applied at such a scale. Theoretical Claims: This paper does not contain any theoretical claims for their method. Experimental Designs Or Analyses: The experimental design is sound. Some limitations regarding experiments are described in the evaluation section above. Supplementary Material: I reviewed the supplementary material regarding the experimental setup and dataset description. Relation To Broader Scientific Literature: The key contributions of this paper are applicable to a broad range of scientific domains, particularly in the biological field. Essential References Not Discussed: Overall, the paper presents key literature on causal modeling. However, since I am not familiar with generative intervention modeling, I am unable to assess whether the paper includes adequate baselines for comparison. Other Strengths And Weaknesses: Strengths: - Provides a comprehensive overview of existing literature. Weaknesses: - The technical novelty is limited, as it adopts an existing Bayesian structure learning framework, e.g., Brouillard et al., "Differentiable Causal Discovery from Interventional Data," NeurIPS 2020. Other Comments Or Suggestions: . Questions For Authors: - What is the computational complexity of the proposed method? - Is the method applicable to complete gene datasets containing thousands of genes? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed comments and feedback. We appreciate that you recognize the strong empirical results of the work and the potential impact in scientific domains such as biology. You raised several important points that we address in detail below. Please let us know if you have any remaining concerns or questions, and we would be happy to clarify further. > it relies on black-box modeling and lacks theoretical guarantees or bounds on the correctness of its predictions. While our approach leverages neural networks, it is specifically designed *not* to be a purely black-box predictor. GIMs are built on a causal modeling framework with interpretable, mechanistic components: a structural causal model $\mathcal{M}$ and interventions $\mathcal{I}$, which model how perturbations manifest in $\mathcal{M}$. This allows us to not only predict the effect of a perturbation, but also understand qualitatively how it alters the system’s underlying causal structure and mechanisms. While a full theoretical analysis is beyond the current scope, existing identifiability results from causal discovery with unknown interventions (e.g., Brouillard et al., 2020) apply to our setting under standard assumptions. We discuss this in detail in our response to Reviewer 8P9S. > technical novelty is limited, as it adopts an existing Bayesian structure learning framework, e.g., Brouillard et al. GIMs introduce a new modeling framework that uses causal models to predict the effects of previously unseen, general perturbations, a task that standard causal discovery methods are not designed to handle. While we build on existing techniques for inferring the causal model $\mathcal{M}$ (e.g., prior design and gradient-based optimization), we introduce an explicit model of how perturbation features induce interventions in the causal model. Specifically, GIMs learn a shared generative mechanism that maps observable perturbation features $\gamma$ to distributions over latent interventions, enabling generalization to entirely new perturbations. In contrast, methods such as Brouillard et al. (2020) identify the unknown intervention targets for each training perturbation without incorporating their features, typically with the goal of recovering the causal graph. This limits their use to scenarios where the goal is recovering the causal graph and predicting effects for known perturbations, making them unsuitable for predicting the effects of new, unseen perturbations. > What is the computational complexity of the proposed method? The computational complexity of GIMs is comparable to prior causal modeling approaches such as Hägele et al. (2023) and Brouillard et al. (2020) since we use similar causal model classes. The only additional overhead introduced by our framework is the forward pass through the GIM MLP, which does not affect the asymptotic complexity. In terms of wall-clock time, for a nonlinear system with 20 variables (as in Figure 2B), training a full GIM takes on average 20.71 ± 0.33 minutes on a GPU, averaged over 20 random datasets. For reference, BaCaDi* takes 11.56 ± 0.11 minutes under the same conditions. Formally, the computational complexity of learning GIMs is: $$ \mathcal{O}\left( T\cdot \left[ d^2 \cdot \left(p_z + H_{\mathcal{M}} + n_{\text{MC}} n_{\text{power}}\right) + n_{\text{MC}} \cdot n_{\text{total}} \cdot d \cdot L_{\mathcal{M}} \cdot H_{\mathcal{M}}^2 + K \cdot \left( L_{\text{GIM}} \cdot H_{\text{GIM}}^2 + H_{\text{GIM}} \cdot (p + d) + n_{\text{MC}} \cdot d \right) \right] \right), $$ where: - $T$: number of training steps, - $d$: number of variables in $\mathbf{x}$, - $p_z$: rank-parameter of $\mathbf{Z}$, - $n_{MC}$: number of MC samples, - $n_{\text{power}}$: number of power iterations used in NO-BEARS acyclicity, - $n_{\text{total}}$: total number of samples across environments, - $L_{GIM}, L_{\mathcal{M}}$: depth of GIM MLPs and causal mechanisms, respectively, - $H_{GIM}, H_{\mathcal{M}}$: width of GIM MLPs and causal mechanisms, respectively, - $p$: dimension of $\gamma$, - $K$: number of perturbation environments. We added this to the supplementary material of our manuscript. > method applicable to complete gene datasets containing thousands of genes? Like comparable causal modeling approaches, GIMs do not scale easily to very large systems. Since our focus is on introducing a new modeling framework for perturbation effects, scalability is not a primary concern – in particular, because our experiments on scRNA-seq data demonstrate that GIMs can achieve strong predictive performance in real-world settings, even when applied to a subset of genes. That said, we agree that scalability is important and may be improved in future work. Approaches developed for scaling causal models, for example modeling the graph $\mathbf{G}$ as a low-rank factor graph (Lopez et al., 2022), can be naturally integrated into our framework by modifying the causal model $\mathcal{M}$ accordingly. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed rebuttal! The detailed analysis of computational complexity and scalability is informative. Based on the rebuttal, I have updated my score to weak accept.
Summary: This paper studies the problem of predicting the unseen perturbation effect in a causal model. The authors propose the Generative Intervention Model (GIM) framework to learn the relationship between perturbation and distribution shift in a causal model. It is claimed that the GIM can predict perturbation features that do not show up in the training data. Detailed experiments on synthetic data and scRNA-seq are done to showcase the good performance of the GIM methods compared to other causal inference methods. Claims And Evidence: Claims made in the submission supported by clear and convincing evidence. Methods And Evaluation Criteria: The paper uses synthetic data and scRNA-seq datasets in the experiments. The entropy-regularized Wasserstein distance between predicted and ground-truth distribution and the Euclidean distance between the mean are used as metric. Theoretical Claims: I did not check the derivations in Appendix B carefully, but they seem correct. Experimental Designs Or Analyses: The experiments are well-designed. Implementation details are provided. Supplementary Material: I did not go through the whole appendix. Relation To Broader Scientific Literature: Several related works are mentioned in the Introduction part Essential References Not Discussed: Relevant works are discussed. Other Strengths And Weaknesses: Strengths: The proposed GIMs leverage perturbation features to predict distribution shifts in unseen conditions, which enables to model to generalize beyond seen distribution. Experimental results show that GIMs outperform classical approaches in identifying causal structures and intervention targets in nonlinear systems. Weaknesses: Jointly estimating both the causal model and the generative intervention mapping seems to be a complex optimization problem, which may be computationally expansive. Other Comments Or Suggestions: It would be good if the authors could add a concrete real-world example in the third section to illustrate the terminologies. It took me a while to understand all the notations. In particular, explanations of $\gamma$ and how it may influence intervention are needed. Besides, i am still a bit confused about what is in the dataset $\mathcal{D}$. Is intervention $I$ in the dataset? Questions For Authors: Thanks for the great work! It will be great if the author can clarify the following questions. 1. As I mentioned above, can you explain the formulation more clearly? In particular, what are known and what are unknown? 2. The authors also mention in the paper that [1] uses a similar Bayesian method for structure learning. Can you explain the similarity and difference between your work and [1]? 3. Can the author explain a bit about how acyclicity is ensured when sampling from the prior? I also wonder if the authors can provide the code in a later version. [1] Lorch, Lars, et al. "Dibs: Differentiable bayesian structure learning." Advances in Neural Information Processing Systems 34 (2021): 24111-24123. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed comments and feedback. We are glad to hear that you find our work to be supported by clear and convincing evidence, as well as our experiments to be well designed. You raised several important points that we address in detail below. > Jointly estimating both the causal model and the generative intervention mapping seems to be a complex optimization problem, which may be computationally expansive. The computational complexity indeed increases compared to settings, where only the causal model and intervention targets are directly inferred (e.g. BaCaDi (Hägele et al., 2023)). However, in practice, we find the overhead to be manageable. For example, in the 20-node nonlinear setting (as used for Figure 2B) and training on a GPU, BaCaDi* takes on average 11.56 ± 0.11 minutes, while GIM takes 20.71 ± 0.33 minutes. Both results are averaged over 20 random datasets. > add a concrete real-world example [....] & explain the formulation more clearly? In particular, what are known and what are unknown? A concrete real-world example is drug perturbation experiments, such as those on scRNA-seq data that we present in Section 6, Figure 5. We perform $K$ experiments, where in each one a drug is applied to cells and we measure their responses - typically via its gene expression profiles $\mathbf{x}$. In experiment $k$, we observe the response of $n_k$ cells, yielding $n_k$ samples of $\mathbf{x}$, denoted as $\mathbf{X}^{(k)}$. The perturbation features $\gamma^{(k)}$ capture observable drug properties, such as identity, molecular characteristics, or dosage. The full dataset $\mathcal{D}$ consists of $K$ pairs of drug features and responses: $(\gamma^{(k)}, \mathbf{X}^{(k)})$ for $k = 1, \dots, K$. Our goal is to predict the response of a new drug $\gamma^*$, i.e. to predict the gene expression under this new drug. The *latent* causal model $\mathcal{M}$ defines the data-generating process of $\mathbf{x}$. A perturbation $\gamma^{(k)}$ influences this system through an *unobserved* intervention $\mathcal{I}^{(k)}$. The distribution over these interventions, conditioned on $\gamma^{(k)}$, is parameterized by *latent* parameters $\phi$, which are shared across all experiments and enable generalization to new perturbations. In summary, we observe $K$ pairs of $(\gamma^{(k)}, \mathbf{X}^{(k)})$. The causal model $\mathcal{M}$, the interventions $\mathcal{I}^{(k)}$, and the parameters $\phi$ are unobserved and jointly inferred. We added this clarification in line 134. > similarity and difference between your work and [Lorch et al. (2021), Hägele et al. (2023)]. GIMs introduce a new modeling framework that uses causal models to predict the effects of previously unseen, general perturbations, a setting that causal discovery methods like Lorch et al. (2021) and Hägele et al. (2023), are not designed to handle. While we adopt a similar prior design for the causal model $\mathcal{M}$ and use gradient-based inference, we introduce a fundamentally different approach to modeling perturbations. Lorch et al. assume that the intervention $\mathcal{I}$ is observed for each perturbation; unlike GIMs, their method cannot be applied when interventions are unknown. Hägele et al. infer interventions for each training perturbation, but treat each perturbation independently, without leveraging any features. As a result, their model cannot generalize to novel perturbations beyond those seen during training. In contrast, we propose to learn how interventions are induced by the observable perturbation features $\gamma$. Specifically, GIMs learn a shared generative mechanism that maps observable perturbation features $\gamma$ to a distribution over latent interventions, enabling generalization to entirely new, unseen perturbations. Thus, while prior approaches focus on recovering the causal graph or inferring training interventions, GIMs are designed for the task of predicting the effects of new perturbations. > how acyclicity is ensured when sampling from the prior? We enforce acyclicity via constrained optimization using the augmented Lagrangian approach, as introduced in Zheng et al. (2018) and also used in related works, such as Brouillard et al. (2020). While this encourages acyclic graphs during training, there is no hard guarantee that the estimated graph is cycle-free. In practice, we randomly break cycles if they occur. > provide the code in a later version We will provide the full code in the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed explanation. It is much clearer to me now. I suggest briefly mentioning the drug example in section 3 to help the reader understand the setting better.
Summary: The authors considered the problem of predicting the impact of interventions with applications in gene perturbation prediction. In particular, in some applications, when an intervention is performed, it is unknown which variables are intervened on. However, some features of the intervention might be known. The authors motivated this by an example in genomics, where features of a drug might be known but it is not clear what is the causal effect of a drug on regulatory pathways in a cell. The authors considered a causal modeling approach so that the outputs are easily interpretable. More specifically, they trained a generative model to map the features of a perturbation to a distribution over atomic interventions in the system. The experimental results showed that for the task of causal discovery or intervention target identification, the proposed method has better performance compared to previous work. Moreover, for predicting the effect of perturbation in out-of-distribution settings, the proposed method achieved similar performance compared to methods that do not use causal modeling in their architectures. ### Update after rebuttal I increased my score to 3 after the discussion on the theoretical guarantees. However, as I mentioned before, the current paper is somehow a natural extension of Lorch et al. (2021) and Hägele et al. (2023), particularly in scenarios where intervention targets are not given or where features are not used to locate intervention targets. Therefore, I am not sure about the technical novelty in the current work and not giving a higher score. Claims And Evidence: The work is more experimental although it also provides proof for the derivations of some of the equations in the appendix. It seems that the authors made a good comparison with previous work. Methods And Evaluation Criteria: They considered several metrics for comparison such as SID and Edge-F1 for causal discovery or $W_2$ and mean distance for the task of predicting the impact of perturbations. Moreover, they considered both synthetic and real datasets in their evaluations. Theoretical Claims: The work is experimental and there is no theoretical claim in the paper. Experimental Designs Or Analyses: I did not check the codes but did read the experiment section. Based on the text, it seems that the experimental results are sound and the authors also provided some explanations for the plots. Supplementary Material: I just checked Appendix D regarding additional experimental results. Relation To Broader Scientific Literature: One of the main applications of causal inference/discovery is in biology such as learning causal structures in gene regulatory networks or predicting the results of gene perturbations. This work is aligned with this line of research and aims to predict the effect of intervention when it is unknown which variables are intervened by giving the treatment. Essential References Not Discussed: As far as I checked the related work, the authors cited the most relevant previous work. Other Strengths And Weaknesses: Strengths: - The authors proposed a method to train a causal generative model that is more interpretable (for instance, checking which variables are intervened) compared to previous work, especially the unstructured models. - They performed comprehensive experiments in various settings, showing that the proposed method is better or on par with previous works. Weaknesses: - The main weakness is that there is no theoretical guarantee in the work. That being said, I should emphasize that this comment is also applied to some of the previous work. - In deriving the objective function of the optimization problem, several design choices/approximations are considered. It is not clear what are the impacts of such approximations and why it is fine to consider them. Other Comments Or Suggestions: It would be nice to study whether some theoretical guarantees can be added to the work in some specific settings such as linear settings. Moreover, it is good to characterize how the contained information in perturbation features $\gamma$ affects perturbation prediction. For this purpose, some simple settings such as bivariate models might be considered to facilitate the analysis. Questions For Authors: - In line 184, what is $p$ in the definition of the domain of $Z$? What are $z_{0i}$ and $z_{1j}$ in line 190? Please explain in more detail the modeling in line 190? - In eq. (11), please explain more about the design choice for target sparsity. - In the experiments, it was observed that the proposed method misses an intervention on a variable but predicts an intervention on its parent or ancestor. It is good to explain this observed phenomenon. - It seems that parts of the proposed method were borrowed from previous work such as Lorch et al. (2021) or Hagele et al. (2023). It is good to mention that other than the generative process of $I$ given $\gamma$, what are the main contributions compared to previous work? - I think the performance of the method highly depends on $M$. It is good to study experimentally the effect of $M$ on the performance. I am geussing $M$ should be very large in order to have reasonable performance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed feedback. We are glad to hear that you find our problem setting relevant, the experiments comprehensive, and the interpretability of our approach valuable. You raised several important points that we address below. Please let us know if you have any remaining concerns or questions, and we would be happy to clarify further. > no theoretical guarantee [...] I should emphasize that this comment is also applied to some of the previous work. We propose a new modeling framework that generalizes to unseen perturbations and demonstrates strong empirical performance on simulated and real-world data. While a full theoretical analysis is beyond the scope of this paper, we agree that identification guarantees are important. Below, we describe how existing results apply directly to GIMs, which allow contextualizing our results. We added these explanations to line 180. Under standard assumptions, the MAP optimizers in GIMs identify the true intervention targets of the training perturbations, $\mathbb{I}$, and a graph $\mathbb{I}$-Markov equivalent to the true one. To establish this, we extend Theorem 2 from Brouillard et al. (2022). They prove that, under standard assumptions and with sufficiently small regularization, the score $\mathcal{S}(\mathbf{G},\mathbb{I})$ is maximized by the true intervention targets and an $\mathbb{I}$-equivalent graph. Assuming the GIM prior distributions have global support, we recover the same maximizers in the large-sample limit, because the posterior is dominated by the likelihood: $\arg\max_{\mathbf{G}, \mathbb{I}} \mathcal{S}(\mathbf{G},\mathbb{I})=\arg\max_{\mathbf{G},\mathbb{I}}\sup_{θ}\log p(θ, \mathbf{G},\mathbb{I}\mid\mathcal{D})$. Assuming that there exists a mapping from features to the true targets and it can be expressed by $\phi$, we have: $\max_{\mathbf{G}, \mathbb{I}}\sup_{θ}\log p(θ,\mathbf{G},\mathbb{I}\mid\mathcal{D})=\max_{\mathbf{G},\phi} \sup_{θ}\log p(θ,\mathbf{G},\phi\mid\mathcal{D})$. The MAP optimizers recover $\mathbb{I}$ and a graph $\mathbb{I}$-Markov equivalent to the true one. Identifiability of interventions for unseen perturbations depends on the informativeness of $γ$, and we empirically show the impact on predictive performance in Figure 4. > design choices/approximations are considered [...] why it is fine to consider them. [....] explain more about the design choice for target sparsity We approximate expectations in the prior and likelihood using Monte Carlo sampling, and we use the Gumbel-Softmax relaxation for differentiable sampling of discrete variables. The MC estimators are consistent, while the Gumbel-Softmax introduces a bias that vanishes as the sigmoid temperature $τ \to 0$. In practice, we fix $τ$. These approximations are widely used in the structure learning literature. Our prior design follows standard causal modeling assumptions, including sparsity and acyclicity for the graph and Gaussian priors over parameters for regularization. For the intervention targets, we use an L1-based sparsity-inducing prior, reflecting the assumption that typically only a few variables are affected per perturbation. This aligns with the sparse mechanism shift hypothesis, which suggests that interventions tend to induce localized changes (Schölkopf, 2022). All priors have full support; thus, in the large-sample limit, their impact diminishes. > what is $p$ [...]? What are $z_{0i}$ and $z_{1j}$ [...]? $p$ is a parameter controlling the rank of the matrices of $p(\mathbf{G}\mid Z)$. For $p\geq d$, this distribution can represent any adjacency matrix without self-loops. In all experiments, we set $p=d$. The tensor $Z$ consists of two $d \times p$ matrices; $z_{0i}$ and $z_{1j}$ refer to the $i$-th row of the first and the $j$-th row of the second matrix, respectively. We added this explanation in line 193. > [In Figure 2C, the] proposed method misses an intervention on a variable but predicts an intervention on its parent We show this example to discuss how GIMs behave in practice and how they may infer consistent but nevertheless incorrect interventions in certain cases. Here, the true intervention targets $X_i$, and in the true graph, $X_j$ is a child of $X_i$. The intervention shifts the distribution of $X_i$, which affects $X_j$ via $p(X_j \mid X_i)$. In the estimated graph, the edge is reversed and the model instead attributes the shift to an intervention on $X_j$. While this is incorrect under the true graph, it is consistent with the predicted graph and still describes the observed distribution. > main contributions compared to previous work Please see our response to reviewer feSa. > method highly depends on $M$ The predictive performance can be affected by the number of MC samples $M$. In all experiments, we used $M=128$. We additionally tested smaller $M$ in the nonlinear setting (Figure 3) and found no significant effect on accuracy: | |M=2|M=32|M=128| |---|---|---|---| |$W_2$|17.78 ± 5.35|17.69 ± 4.99|18.13 ± 4.33| --- Rebuttal Comment 1.1: Comment: Thank you for the responses. I suggest adding an explanation of the theoretical guarantees, as well as a discussion on the impact of M on the accuracy, in the revised version. I have adjusted my score accordingly. However, I still think that the current paper is somehow a natural extension of Lorch et al. (2021) and Hägele et al. (2023), particularly in scenarios where intervention targets are not given or where features are not used to locate intervention targets.
Summary: This paper studies the problem of causal perturbation modeling to recover the causal structure and intervention targets given perturbation features from several interventional environments. The use-case in this paper is the gene perturbations in the biology domain. The authors propose generative intervention model (GIM) which learns to map perturbation features to atomic intervention vectors (1-sparse) in a jointly-estimated causal model. This modeling enables the ability to infer the effect of unseen perturbation features during inference. Thus, this approach is robust to distribution shifts in the feature perturbation space. Experiments are conducted on synthetic and scRNA-seq drug perturbation data to show the effectiveness of the proposed method. ## Update After Rebuttal I thank the authors for the detailed response to my questions and concerns. I believe this is an interesting work that relaxes the major assumption of having intervention targets available in causal representation learning. Furthermore, I believe the application in biology is quite interesting. Therefore, I lean towards **acceptance** of this paper. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, I checked the metrics used, including the Structural Intervention Distance (SID) and Wasserstein distance for evaluation. Furthermore, I checked the soundness of the data-generation with respect to the structural causal models and the empirical setup. Supplementary Material: Yes, I checked Appendices C and D for experimental details and additional results. Relation To Broader Scientific Literature: Overall, this paper tackles an important problem in causal generative modeling. Specifically, for domains such as biology, predicting the effects of unseen interventions in critical for scientific discovery. Along a line of work that focuses on causal models for biological applications, such as intervention design and causal representation learning, this work explores a different approach to interventional inference. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strengths - Overall, the paper is written well with clear intuitions about the problem of causal intervention modeling. Furthermore, this paper seems to make weaker assumptions about access to interventional data, specifically that one would only have access to the perturbation features and not the entirety of the interventional data for all environments, which is a more realistic scenario in practice. - The application in single-cell gene perturbations is certainly an interesting use-case with great potential impact, especially for reasoning about hypothetical perturbations outside the support of the training distribution. - The partial OOD (novel dosages with seen targets) and fully OOD (novel targets with seen dosages) is a robust way to evaluate the effects of unseen perturbations. Specifically, the fully OOD setting shows promising results compared to other methods. ## Weaknesses - The overall objective can use more clarity. The idea seems to be to parameterize a network $\phi$ to infer the intervention target given the perturbation vector. To learn a joint causal model, one objective is to learn a latent variable $z$ to model the causal graph $G$ probabilistically. The overall goal is to maximize a log-likelihood with respect to the causal model and intervention prediction network over the dataset. Other Comments Or Suggestions: N/A Questions For Authors: - What is the main difference between this approach for extrapolating to unseen perturbations and the causal representation learning method proposed by Zhang et al? In this method, the authors show identifiability guarantees from soft interventions and recover the causal factors and their relationships up to an equivalence class given sufficient interventional data. - Intuitively, what is the meaning of the perturbation vector? Is this simply the perturbation that generates K different interventional environments? Is it a value vector that takes a different value across environments? - From my understanding, this setting does not explicitly require access to interventional data. Rather, one requires only a perturbation vector. Is this interpretation correct? - The authors claim that multiple interventions can also be supported by simply inferring each one separately from the perturbation features. However, how does this work in practice? Zhang et al. Identifiability Guarantees for Causal Disentanglement from Soft Interventions. NeurIPS 2023. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed comments and feedback. We are glad to hear that you find our approach addresses an important problem in causal generative modeling with an interesting use-case with great potential. You raised several important points that we address in detail below. > overall objective can use more clarity. We revised the manuscript in lines 95ff, 133ff, 154ff to further clarify. > What is the main difference between this approach for extrapolating to unseen perturbations and the causal representation learning method proposed by Zhang et al? Thank you for pointing us to the work by Zhang et al. (2023). Their setting differs from ours: their causal model is defined over latent (unobserved) variables, whereas GIM models interventions over observed variables (e.g., gene expression). This makes GIM’s predictions more interpretable, as it directly models causal effects between semantically meaningful variables. While it is unknown in both approaches how the perturbation modifies the underlying causal model, the two methods tackle this challenge in fundamentally different ways. Zhang et al. (2023) infer interventions for each perturbation in the training data, independent of any features. In contrast, we propose to learn how interventions are induced by the observable perturbation features $\gamma$. Since Zhang et al. (2023) do not incorporate such features, their model *cannot generalize to novel perturbations* beyond those seen during training, similar to Hägele et al. (2023). Zhang et al.’s approach is limited to combinations of previously observed interventions, while GIMs enable predictions for entirely unseen new perturbations by conditioning explicitly on $\gamma$. We also note that, because GIMs model the causal variables directly, known identifiability results can be extended to GIMs (see our response to Reviewer 8P9S). Overall, we view the two approaches as complementary, exploring different but important aspects of causal modeling with perturbations. > meaning of the perturbation vector $\gamma$ encodes any observable information about the perturbation applied in a given environment, such as drug properties and dosage in a drug perturbation experiment. For example, in the scRNA-seq setting, we define it as a one-hot encoding of the drug along with its dosage as the perturbation features. The perturbation vectors $\gamma$ indeed differ across experimental environments. For each environment (i.e., context or experiment), we have one perturbation vector $\gamma \in \mathbb{R}^p$, and a corresponding collection of samples of the observed variables $\mathbf{x}$, such as gene expression levels. In line 78, we adjusted the manuscript to clarify the meaning of $\gamma$ and that it differs across environments. > this setting does not explicitly require access to interventional data. Rather, one requires only a perturbation vector. Is this interpretation correct? Yes, your interpretation is correct. GIMs do not require access to interventional data in the sense of having knowledge of the atomic intervention $\mathcal{I}$ corresponding to a perturbation — that is, which variables in the causal model were directly manipulated and how. During training, we assume access to $K$ different perturbation experiments, where for each experimental context we observe perturbation features $\gamma$, along with samples of observed variables $\mathbf{x}$ collected under that condition. At test time, GIMs only require a perturbation vector $\gamma^*$ to predict the system’s response. This setup reflects many realistic biological settings, where intervention targets are unknown but perturbation metadata is available. > multiple interventions can also be supported by simply inferring each one separately from the perturbation features. However, how does this work in practice? Our approach handles multiple perturbations (i.e., environments) by inferring a distribution over interventions $\mathcal{I}$ — that is, intervention targets and parameters — for each perturbation individually based on its perturbation features $\gamma$. In practice, the generative intervention model consists of two neural networks, $g_\phi$​ and $h_\phi$​, which map $\gamma$ to the parameters of this distribution: $g_\phi$​ outputs Bernoulli probabilities over targets, and $h_\phi$​ outputs the mean and variance of a Gaussian over intervention parameters. Crucially, the parameters $\phi$ are shared across all perturbation environments. Given a different $\gamma^{(k)}$, the same model predicts a distinct intervention distribution for that specific perturbation. Combined with the causal model $\mathcal{M}$, this yields a full predictive distribution over the system variables $\mathbf{x}$. In short, multiple perturbations are handled by applying the shared generative model to each perturbation feature vector $\gamma$, enabling per-environment inference in a unified way.
null
null
null
null
null
null
Learning Safety Constraints for Large Language Models
Accept (spotlight poster)
Summary: The study proposes a geometric approach called SaP (Safety Polytope) for large language models (LLMs) to mitigate safety risks. SaP learns and enforces linear safety constraints directly in the model's representation space, identifying safe and unsafe regions. Experiments show it reduces adversarial attack success rates and provides interpretable insights into its safety mechanisms. Claims And Evidence: The claims are supported by the evidence. For example, SaP operates post-hoc in the representation space without modifying model weights. The approach relies on modifying the internal activations of LLMs rather than retraining or fine-tuning. Methods And Evaluation Criteria: The proposed method, SaP, indeed address the problem of learning safety constraints automatically for LLMs. Theoretical Claims: N/A Experimental Designs Or Analyses: The paper evaluates SaP’s ability to reduce attack success rates against seven adversarial attack methods, including gradient-based (e.g., GCG, GBDA) and human-crafted jailbreaks. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper address the gap of leverage token vectors to build a safety constraint for LLM. Essential References Not Discussed: The MDP modelling is a key concept of the paper. While there is some exploration work on modelling LLMs as MDP is missing: - Li, Kenneth, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. "Inference-time intervention: Eliciting truthful answers from a language model." Advances in Neural Information Processing Systems 36 (2023): 41451-41530. - Song, Da, Xuan Xie, Jiayang Song, Derui Zhu, Yuheng Huang, Felix Juefei-Xu, and Lei Ma. "Luna: A model-based universal analysis framework for large language models." IEEE Transactions on Software Engineering (2024). Other Strengths And Weaknesses: The scalability of the work is doubtable. When the dataset gets very large, I am not sure whether the learned safety constraints are still effective. Other Comments Or Suggestions: N/A Questions For Authors: How scalable is the proposed method? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and accurate summary of our paper. We appreciate your recognition that our "claims are supported by the evidence" and that "SaP indeed addresses the problem of learning safety constraints automatically for LLMs." **Regarding Scalability** We appreciate your question about the scalability of our approach. We acknowledge that polytope learning presents inherent scalability challenges, which is precisely why we adopted the Convex Polytope Machine (CPM) algorithm instead of traditional methods like QuickHull. This design choice was deliberate to address the computational complexity issues in high-dimensional representation spaces. Our empirical results demonstrate CPM's effectiveness at scale: On the BeaverTails dataset (330K examples), our method achieves performance comparable to MLPs with similar parameter counts (Appendix D.3, Table 12). For the HarmBench defense task with ~890K data points, SaP achieves an average safety classification accuracy of 91.3% and maintains attack success rates below 5%, further validating its effectiveness at scale. Based on these results, we expect CPM to perform robustly (on par with MLP) on datasets of similar or even larger scale. In Section 6, we discuss future directions for improving beyond CPM, including references to promising work like Hashimoto et al. (2023) on neural polytope learning. For practical applications, the inference-time steering mechanism remains efficient regardless of training data size, as it only requires a forward pass through the concept encoder followed by verification against the learned polytope facets. **Regarding Missing References** Thank you for suggesting the valuable references (Li et al., 2023 and Song et al., 2024) on MDP modeling for LLMs. We will incorporate these references in our revised manuscript to strengthen the discussion on using MDPs as a theoretical foundation for language model behavior. These papers provide important context for our approach to modeling language generation as a constrained MDP. We greatly appreciate your feedback and we are happy to address any further questions or comments. Having addressed your concerns, we would appreciate your feedback during this discussion period. Let us know if there are any further questions that we can clarify, otherwise, we would appreciate it if you would consider increasing your score.
Summary: The authors propose to map unsafe model responses to save model regions in representation space without adjusting the weights of the respective model. Specifically, they represent safety constraints via polytopes and filter responses by assessing the similarity of latent features to the learned polytope. The method restricts models from generating safe outputs while model capabilities are maintained. References for the remaining review: [1] Chao et. al., "Jailbreaking Black Box Large Language Models in Twenty Queries", 2023 [2] Andriushchenko et. al., “Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks”, 2024 [3] Schwinn et. al., "Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space", 2024 [4] Carlini et. al., "Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods", 2017 ## update after rebuttal Most of my concerns were addressed. I still believe it's highly relevant (even in the scope of non-adversarial evaluations) to assess the refusal/safety trade-off of this defense. XS-Test, for example, has only ~100 prompts and would give insights into the overrefusal behavior. I will followed the discussion and increased my score from 2 to 4 and recommend accepting this work. Claims And Evidence: The authors claim that their proposed algorithm can prevent harmful behavior in LLMs by steering latent activations associated to harmfulness to safe regions in the latent space. They back up their claims through a large-scale empirical study using 7 different adversarial attacks (excluding recent Sota attacks), three different LLMs, and two common harmfulness benchmarks. Moreover, they conduct several ablation studies to verify the contribution of individual design choices within their algorithm Methods And Evaluation Criteria: - The benchmarks datasets are widely used in the research field and appropriate - The used adversarial attacks are mostly weak (GBDA, PEZ , Human Jailbreak, Direct Request). Even GCG may be viewed as weak considering that multiple more efficient GCG variations have been proposed in the literature since the original paper. - The utility benchmark is not suitable. LLMs can achieve high scores on MMLU without being able to generate coherent sentences. - Measures for harmfulness evaluations are appropriate Theoretical Claims: N/A Experimental Designs Or Analyses: The results of the conducted experiments are consistent with the results reported in the literature. I did not find anything unusual. However, as described in Methods And Evaluation Criteria some of the experiments are not suitable to back up the claims of the paper. Supplementary Material: N/A Relation To Broader Scientific Literature: The authors position their paper appropriately within the related work. However, some references/comparisons with more recent adversarial attacks are missing e.g., [1, 2] Essential References Not Discussed: I am not aware of a paper that I would deem essential to discuss in the context of the presented work that is currently missing from the paper. Other Strengths And Weaknesses: **Strengths** - The authors propose a novel method to steer model activations from harmful regions to safe regions - The authors conduct several ablation studies to investigate the individual components of their mechanism **Between Strength and Weakness** - The authors use several adversarial attacks and conduct experiments on multiple models. However, used attacks are not sufficient **Weakness** - The authors only use MMLU to analyze model capabilities - The authors do not try to adaptively break their defense. Detecting adversarial examples (e.g., in latent space) has shown to be very difficult [4]. Adaptive evaluations are necessary to evaluate new defenses. Other Comments Or Suggestions: Text in Figure 4 is not readable. Can the authors increase the font size of the axis labels? Numbers are less relevant. Colorbar could be helpful as well. Questions For Authors: - Could the authors conduct evaluations with stronger and more recent attacks? - Could the authors conduct a sanity check with very strong continuous attacks if they are able to bypass their defense? A negative result would point towards errors in their evaluation [3] - Could the authors add more suitable utility evaluation benchmarks to investigate model capability? E.g., MT-Bench and over-refusal benchmarks # After the rebuttal Most of my concerns were addressed. I still believe its highly relevant (even in the scope of non-adversarial evaluations), to assess the refusal/safety trade-off of this defense. XS-Test, for example, has only ~100 prompts and would give insights into the overrefusal behavior. I will follow the discussion and am currently leaning toward increasing my score to three if no other major concerns are raised by the other reviewers. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your thoughtful review. We particularly appreciate your recognition of our paper's key strengths: - The "novel method to steer model activations from harmful regions to safe regions" - Our "several ablation studies to investigate the individual components of [our] mechanism" - The comprehensive evaluation across "several adversarial attacks" and "multiple models" **Clarification on Paper Focus and Contributions** We would like to first clarify that SaP is not primarily a jailbreak defense paper. Our key contribution is reformulating safety as a geometric constraint learning problem in representation space, which provides both interpretable insights (through facet specialization) and an effective control mechanism (which includes but is not limited to defense). This geometric perspective offers a principled framework for conceptualizing and controlling model safety behaviors. Throughout the paper, we consistently frame our approach in terms of "safety" and "constraints" in their general sense: 1. In Section 2, we discuss how "safe and ethical language not only tries to maximize a reward function, but is also subject to some constraints. For instance, humans naturally avoid using language that would hurt someone's feelings, incite harm, or solicit unlawful actions." 2. Our extensive experiments on BeaverTails (Sections 4.2-4.3) span 14 distinct safety categories, including animal abuse, discrimination, privacy violations, and misinformation—far beyond just jailbreak prevention. That said, we agree that stronger defense performance can better support our perspective of geometric constraint learning. We therefore conducted the additional experiments you requested. **Regarding Stronger and Adaptive Attacks** As you suggested, we have evaluated our defense against stronger and more recent attacks. In our response to Reviewer FoeD, we report results on AutoDAN and adaptive attacks. Please refer to it for detailed setting and discussions. Notably: 1. For AutoDAN, our method reduces ASR from 66.75% to 1.35% on Ministral-8B, from 1.77% to 0% on Llama2-7B, and from 45% to 0.80% on Qwen2-1.5B. 2. For adaptive attacks (on AdvBench), our method achieves 0% ASR for Llama2-7B by directly transferring our trained polytope and hyperparameter settings. With a slight increase in unsafe penalty, Ministral-8B can be steered from 100% ASR to 12%. **Regarding Utility Evaluation** > Reviewer concern: "Could the authors add more suitable utility evaluation benchmarks to investigate model capability? E.g., MT-Bench and over-refusal benchmarks" We agree that MMLU alone may not fully represent model capabilities. Following your suggestion, we conducted additional evaluations on MT-Bench: | **Model** | **First Turn Score** | **Second Turn Score** | | - | - | - | | Ministral-8B | 7.55 | 6.98 | | Ministral-8B + SaP | 9.01 | 7.71 | | Llama2-7B | 6.54 | 5.46 | | Llama2-7B + SaP | 6.98 | 6.21 | | Qwen-1.5B | 6.90 | 5.15 | | Qwen-1.5B + SaP | 6.85 | 5.06 | While the results suggest that models with SaP maintain performance and can generate coherent sentences without over-refusing on MT-Bench, we do not intend to claim that SaP improves performance. The slight differences observed might be due to the inherent noise in GPT-based evaluations and the limited sample size (80 evaluation examples). We manually inspected the outputs and did not find any nonsensical or rejection responses. The results are attached in the following URL: https://limewire.com/d/asA5e#Rlx4uqoGNp Regarding your suggestion to evaluate on benchmarks like OR-Bench, we see this as a valuable future direction. In our preliminary investigations with sensitive but benign prompts, we observe that SaP maintains high performance with minimal false positives. We would like to highlight again that the main contribution of this paper does not directly concern with proposing a new SoTA defense method, but rather to propose a new framework for explicit modeling of safety in LLMs. **Conclusion** We thank you for your constructive feedback. The additional experiments have strengthened our confidence in SaP's learned safety knowledge, which supports our central contribution: reformulating language model safety as a geometric constraint learning problem in representation space. We believe this geometric perspective offers a principled framework for conceptualizing and controlling model safety behaviors beyond just jailbreak defense. We hope our response addresses the reviewer’s remaining concerns, and given that we could address all the other concerns raised in the review, we would appreciate it if the reviewer would consider increasing our score. We are happy to answer any other open questions. Thanks again for your active engagement!
Summary: The paper introduces SaP, a post-hoc safety mechanism that defines a convex polytope in an LLM’s feature space. Using a Concept Encoder to disentangle safety-related features, it learns linear constraints that steer unsafe outputs into a safe region without retraining the model. Experiments show that SaP dramatically reduces adversarial attack success rates while maintaining overall model performance. Claims And Evidence: The claims are supported by a series of experiments on multiple LLMs, but I am not confident in some of them; please see the section for questions for the authors. Methods And Evaluation Criteria: it makes sense. Theoretical Claims: I have checked the math in the main paper, and it looks reasonable to me. Experimental Designs Or Analyses: I have some concerns regarding the experimental design/ evaluation; please see the questions for the authors section. Supplementary Material: I have reviewed the extra experiments in Appendix D. Relation To Broader Scientific Literature: The paper builds on work in constrained decision processes and safe RL by framing safety as linear constraints in LLM representations. It leverages ideas from convex optimization and interpretability research to propose a novel post-hoc safety mechanism. Essential References Not Discussed: all related works have been cited, and [1] from anthropic is also quite related to the topic shown in the paper (though it is released after the submission deadline, it could be added in the future version) [1] Sharma, M., Tong, M., Mu, J., Wei, J., Kruthoff, J., Goodfriend, S., ... & Perez, E. (2025). Constitutional classifiers: Defending against universal jailbreaks across thousands of hours of red teaming. arXiv preprint arXiv:2501.18837. Other Strengths And Weaknesses: Strengths: 1. introduce geometric safety constraints to detect harmful response. 2. post-hoc mechanism preserves model performance while dramatically reducing adversarial attacks. 3. provides interpretable safety facets, offering insights into distinct harmful content categories. Weaknesses: 1. the LLMs used in the paper are somewhat outdated; Llama 3.1 8B would be preferable to Llama2-7B, but I admit it is just a minor problem. 2. more discussion on the influence of the LLM's size on the proposed method is expected. 3. the paper only considers gradient-based attacks (which optimize for unreadable adversarial strings) and does not consider other attack methods like AutoDan, which optimize for readable adversarial strings. Other Comments Or Suggestions: the writting looks good Questions For Authors: Below is my main concern about this paper. If I have misunderstood anything, please correct me, and I would be happy to raise my score if the authors address these concerns well. From my perspective, the proposed method first trains the polytope based on the collected data (using only the last hidden representation of the final token). Once the polytope is obtained, during evaluation, it is applied at each token's position; if a token's representation is out of bounds, it is mapped back into the polytope. I then have several questions: (1) **Overfit Problem**: when training the polytope using gradient-based methods such as GCG, there is a concern that it might overfit to the specific attack pattern rather than learning robust safety constraints. GCG typically generates adversarial suffixes that are unreadable and have high perplexity. As a result, the hidden representations capture these characteristics through self-attention, and the polytope may rely on them for safe/unsafe classification instead of truly learning generalized safety constraints. This raises questions about defense transferability: if the polytope is trained on gradient-based attacks, will it effectively defend against other attack types such as human-crafted jailbreaks, AutoDan [1], or rule-based attacks [2], and vice versa? (2) **Refusal Pattern After Training**: Building on the previous point, I am surprised that simply mapping the representation back to the trained-internal polytope—as outlined in Algorithm 1—can prevent harmful responses. However, the paper does not provide examples of what the resulting refusal responses look like. Will these responses resemble the traditional refusal pattern, such as “I can’t…”? I am curious about the LLM's response style when the representation is mapped back, and it also raises the question of whether enforcing the safety constraint on only the initial tokens is sufficient to prevent attacks, that will improve the efficiency and reduce false positive rate. (3) **Unfair Baseline Comparison**: SaP is trained using the top 3 most effective attacks, which are also included in the evaluation. This means that the most effective attack patterns are already "leaked" during training for SaP but not for the other baselines. I recommend that the authors include held-out attacks—such as AutoDan [1] or rule-based attacks [2]—to ensure a fairer comparison. (4) **Simple MLP may also work**: Following from (3), if one were to directly train a simple MLP for binary classification on the hidden representations using the representation from the same top 3 attacks, I doubt it could achieve a much lower ASR than SaP while maintaining standard MMLU accuracy. The motivation comes from [4]. (5) **MMLU does not prove that standard performance is maintained**: It is not surprising that MMLU performance does not drop because the prompts in MMLU are benign and do not include topics like fraud, hate, or violence as seen in HarmBench. In such cases, the polytope—or even a simple MLP—can easily learn and separate these patterns. In my view, it would be more effective to demonstrate that the false positive rate remains low when the LLM with the polytope is prompted with benign questions involving sensitive topics such as fraud, hate, or violence. This would show whether the polytope mistakenly rejects benign responses involving sensitive topics and prove that it is enforcing constraints based on the harmfulness of the response rather than merely the topic. [1] Liu, X., Xu, N., Chen, M., & Xiao, C. AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models. In The Twelfth International Conference on Learning Representations. [2] Andriushchenko, M., Croce, F., & Flammarion, N. (2024). Jailbreaking leading safety-aligned llms with simple adaptive attacks. arXiv preprint arXiv:2404.02151. [3] Zou, A., Phan, L., Chen, S., Campbell, J., Guo, P., Ren, R., ... & Hendrycks, D. (2023). Representation engineering: A top-down approach to ai transparency. arXiv preprint arXiv:2310.01405. [4] Sharma, M., Tong, M., Mu, J., Wei, J., Kruthoff, J., Goodfriend, S., ... & Perez, E. (2025). Constitutional classifiers: Defending against universal jailbreaks across thousands of hours of red teaming. arXiv preprint arXiv:2501.18837. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful analysis and valuable feedback. We would like to first clarify that SaP is not primarily a jailbreak defense paper. Our key contribution is reformulating safety as a geometric constraint learning problem in representation space, which provides both interpretable insights and an effective control mechanisms. Throughout the paper, we consistently frame our approach in terms of "safety" and "constraints" in their general sense. Please see our reply to Reviewer aHnQ for an expanded explanation on this. **Specific Responses to Concerns** > "Overfit Problem" We thank you for this insightful comment. We agree that only training on GCG will cause SaP to overfit to its specific attack pattern. However, this is not directly a weakness of SaP, as it can be easily mitigated by training it with data collected from diverse sources. This is to be expected, as machine learning models, SaP being no exception, are generally only as good as their training data (no free lunch). To address the concern about overfitting to specific attack patterns, we conducted additional experiments with attacks not seen during training. The polytopes used in the following experiments are the same ones we trained for reporting the main results in the paper, with no access to any new information. AutoDAN ASR (5 seeds): | **Method** | **Ministral-8B** | **LLaMA2-7B** | **Qwen2-1.5B** | | - | - | - | - | | Original | 66.75 | 1.77 | 45 | | Original + SaP | 1.35 ± 0.49 | 0.00 ± 0.00 | 0.80 ± 0.82 | Adaptive attack: | **Model Configuration** | **Attack Success Rate (%)** | | - | - | | Llama2 Original | 100 | | Llama2 + SaP | 0 | | Ministral Original | 100 | | Ministral + SaP | 98 | | Ministral + SaP (increased λ) | 12 | | Qwen2 Original | 100 | | Qwen2 + SaP | 88 | The adaptive attack experiments were conducted on AdvBench, following the original released code implementation, which contains requests not seen in HarmBench. To the best of our knowledge, three papers to date report defense performance on adaptive attacks, and the most comparable result is from Yi et al., 2025, where they also train their defense methods on HarmBench and test on AdvBench for adaptive attacks. Their best reported performance across 9 defense methods on Llama3-8B is 61% ASR (Table 2 of Yi et al.), while our method achieves 0% ASR for Llama2-7B. For Ministral, we found that a direct transfer of SaP hyperparameters trained on HarmBench doesn't defend well against adaptive attacks (98% ASR), but increasing $\lambda$ (the unsafe penalty) can substantially reduce this to 12% ASR. For Qwen2, we discovered that adaptive attack learns to induce harmful responses in Chinese, which our English-trained SaP can't effectively counter. This highlights an interesting direction for future work on multi-lingual adversarial defense through geometric constraints. > "Refusal Pattern After Training" Throughout numerous adversarial inputs, the polytope learns to steer models toward safety in a nuanced way. We observe that it guides models to incorporate common refusal phrases like "Sorry", "can't answer this question", "I'm just an AI", etc. We are happy to include qualitative examples in the revised manuscript, if requested. > "Unfair Baseline Comparison" We appreciate this concern. Our additional experiments with AutoDAN and adaptive attacks address this by testing on both attack types and adversarial requests not seen during training (generalization from HarmBench to AdvBench). These results strongly support the generalization capabilities of SaP beyond specific patterns used in training. > "Simple MLP may also work" From our experiments on BeaverTails, we do see that SaP performs similarly to an MLP with comparable parameter count. However, rejecting based on binary classification might result in over-rejection, which we experimented with and reported in Figure 2, Rejection Sampling (RJ). It reduces Ministral's MMLU performance from 63.4% to 28.5%. Furthermore, polytope constraints offer interpretability benefits that are not possible with a standard MLP, as discussed in our paper's Section 4.2. > "MMLU does not prove that standard performance is maintained" Regarding your suggestion to evaluate on benchmarks like OR-Bench, we see this as a valuable future direction. In our preliminary investigations with sensitive but benign prompts, we observe that SaP maintains high performance with minimal false positives. We would like to highlight again that the main contribution of this paper does not directly concern proposing a new SoTA defense method, but rather to propose a new framework for explicit modeling of safety in LLMs. We thank you once again for your feedback and hope our responses adequately address your concerns. We'd be happy if you would consider revising our score, and we are happy to answer any other open questions. Thanks again for your feedback and active engagement in the review process. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Most of my concerns have been addressed; however, I have a few follow-up questions regarding the “Simple MLP may also work” part: 1. **Scalability:** You mentioned that SaP performs similarly to an MLP when both have comparable parameter counts. Considering that MLPs can be scaled up relatively easily—by using additional layers and larger hidden dimensions (even in this case, the total number of the parameters for it will still not be huge as it's just an MLP) —how would a larger MLP perform relative to SaP? Additionally, can SaP also be scaled up effectively to provide further performance improvements? 2. **Over-Rejection in Binary Classification:** The response noted that rejecting based on binary classification can lead to over-rejection, yet the overall performance between the MLP and SaP remains similar. This seems somewhat contradictory. Besides, MLP can still have the flexibility to implement a workflow similar to that described in Algorithm 1: For instance, if MLP($\bar{\pi}_l(\boldsymbol{x})$) predicts that the input is safe, we could simply use $\bar{\pi}_l(\boldsymbol{x})$; if it predicts unsafe, we might employ a binary search between 0 and $\bar{\pi}_l(\boldsymbol{x})$ to identify a point $h$—closest to $\bar{\pi}_l(\boldsymbol{x})$—where MLP($h$) predicts safe (given that MLP(0) is definitely safe, such a point must exist). In this way, it performs similarly with SaP, and there will be no over-rejection. This is just a simple strategy, and there are more complex ways to achieve it, as the problem here is analogous to finding the decision boundary of an MLP's prediction—a topic that has been widely explored in previous literature, especially in the context of boundary-based attacks. But I agree that in this case, the Interpretability may be lost, I am just curious about the performance. 3. **Interpretability vs. Complexity:** You highlighted that the polytope constraints in SaP offer interpretability benefits that a standard MLP cannot provide. However, these constraints might also limit the model’s capacity, potentially resulting in reduced performance. That’s why I brought up the MLP—to understand how much performance is sacrificed in SaP in exchange for interpretability. In other words, when using a standard, unconstrained MLP, what is the performance gap compared to SaP? I appreciate any insights you can provide on these points! --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful questions about SaP compared to MLPs. References: [1] Gao, Leo, Tom Dupré la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. "Scaling and evaluating sparse autoencoders." (2024). [2] Kantchelian, Alex, Michael C. Tschantz, Ling Huang, Peter L. Bartlett, Anthony D. Joseph, and J. Doug Tygar. "Large-margin convex polytope machine." Advances in Neural Information Processing Systems 27 (2014). **On Scalability:** > [...] scaled up for performance improvements For scaling without preserving interpretability, one could simply add more layers to the concept encoder. One can see the model as being the same as an MLP until the second last layer, and change the MLP's last layer from linear to a polytope. In our experiments, adding one layer (the concept encoder) significantly improves the polytope performance compared to versions without it. For scaling while maintaining interpretability, besides adding more layers, one might also need to scale the sparse autoencoder (SAE) architecture. This is possible from the line of SAE scaling research such as [1], which explores designs of SAE that can be effectively scaled to handle complex models while preserving their interpretability advantages. We believe scaling could improve interpretability while maintaining safety and capability. Since our work proposes a new framework for safety in LLMs, our focus is to demonstrate that, under the simplest modifications, one could already balance safety and capability, and we welcome future research on scaling it up for more complex problems. **On Over-Rejection:** > [...] lead to over-rejection, yet the overall performance [...] remains similar. When we noted similar performance between MLP and SaP, we were referring to classification accuracy. Unlike binary classification that leads to over-rejection by treating all unsafe predictions equally, SaP takes into account the magnitude of constraint violations during steering. This allows for minimal modifications to outputs that might be just slightly over the safety threshold, rather than rejecting them entirely. This difference is evident in Figure 2, comparing rejection sampling with SaP on MMLU. > Implement a workflow similar to Algorithm 1 The workflow you described is precisely our experiments in Figure 6, where we compare 1-facet polytopes (i.e., MLP) with polytopes with more facets. MLP generally underperforms multi-facet versions. This aligns with classic results comparing max-margin SVMs (single decision boundary) with polytopes (multiple boundaries). By [2], under the same feature space, polytope provides a greater modeling capacity and can create larger decision margins. Leveraging the polytope's large-margin nature is an interesting future research direction in studying against bounded attacks. For more detailed discussions on max-margin SVM vs. polytopes, please refer to [2]. Regarding MLP performance, please see the results below. **On Interpretability vs. Complexity:** > Performance gap compared to MLP? As we discussed, we do not observe performance degradation in SaP. On BeaverTails, it performs on par with an MLP with similar parameter count. To address your question about a standard MLP, we experimented with these steps: (1) Train an MLP based on BCE loss with safety labels. (2) Use the same algorithm as our steering method (Algorithm 1), replacing the polytope with MLP. We experimented with AutoDAN and adaptive attack. The MLP we experimented with has a comparable parameter count to SaP, and we additionally experimented with a version where we added 5 extra layers. AutoDAN ASR: | **Method** | **Ministral-8B** | **LLaMA2-7B** | **Qwen2-1.5B** | | - | - | - | - | | Original | 66.75 | 1.77 | 45 | | SaP | 1.35 ± 0.49 | 0.00 ± 0.00 | 0.80 ± 0.82 | | MLP | 20 | 0.25 | 1.5 | | MLP + 5 layers | 20 | 0.25 | 1.25 | Adaptive Attack ASR: | **Method** | **Ministral-8B** | **LLaMA2-7B** | **Qwen2-1.5B** | | - | - | - | - | | Original | 100 | 100 | 100 | | SaP | 12 | 0 | 88 | | MLP | 100 | 46.81 | 100 | | MLP + 5 layers | 100 | 0 | 100 | On AutoDAN, both MLP versions can perform reasonably on Llama2-7B and Qwen2-1.5B (though worse than SaP), but their Ministral performance is far worse than SaP. For adaptive attacks, while deeper MLPs can match SaP performance on LLaMA2-7B, they fail on Ministral and Qwen where SaP maintains some robustness. This suggests SaP's geometric modeling offers advantages that cannot be easily replicated by simply scaling MLPs by a few layers. These experiments reinforce our paper's primary contribution of introducing a geometric perspective that naturally disentangles safety concepts while offering effective model control mechanisms (which are not limited to defense). Thank you for raising these questions. We believe we have addressed your concerns and would appreciate it if you could consider raising your score based on our responses.
Summary: The paper presents a novel approach to increase safety and the adversarial robustness of LLM. Instead of fine-tuning the parameter of the model for safety alignment, the introduced approach SaP (Safety Polytope) is applied during inference by enforcing linear safety constraints using Convex Polytope Machines in the model's representation space or—inspired by sparse autoencoders—a projection (the concept encoder) of the model representation to a higher but sparse dimensional space. By steering the activation subsequently of detecting the representation of a unsafe concept SaP influence the next token prediction during the sampling process. Claims And Evidence: The paper's primary claim is that SaP provides an inference-time safety mechanism for LLMs that increases safety while maintaining model accuracy. This is supported by: 1) Empirical evidence on the harmbench dataset while applying adversarial attacks to elicit unsafe behaviour and simultanously evaluating maintaining general LLM performance using MMLU. Considering both SaP outperforms the selected baselines. 2) Further, the paper provides an interpretability analysis demonstrating that different safety constraints specialize in detecting specific types of unsafe content. However, the empirical evaluation lacks a comparison to related inference time steering methods such as 1) steering vectors (see [1] and [2] or see [3] for an overview of other related detection/steering approaches) and 2) especially SAE which the presented approach even draws inspirations from. **References** [1] Wang et al. Model Surgery: Modulating LLM's Behavior Via Simple Parameter Editing. (2024): https://arxiv.org/abs/2407.08770v1 [2] Lee et al. A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity (ICML 2024): https://arxiv.org/abs/2401.01967 [3] https://arxiv.org/pdf/2501.17148 While the benchmark was probably released after the ICML submission deadline, the paper provides an overview of (safety) detection/steering algorithms related to the presented method. Methods And Evaluation Criteria: Yes, the choice of evaluation datasets makes sense for the type of assessment. The authors evaluate the introduced approach on the harmbench dataset while applying a range of adversarial attacks to elicit unsafe behavior. Additionally, the MMLU benchmark is used to demonstrate that the general performance of the model is maintained. Further, the introduced approach is compared against five rejection baselines to demonstrate the advantages of SaP. However, as described above, more related inference-time approaches, such as steering vectors or SAEs, should be considered. Theoretical Claims: The paper defines LLM safety as a CMDP problem, where constraints define safe and unsafe regions in the representation space. The claims are supported by references to prior work. Experimental Designs Or Analyses: In general, the experiments are well-structured, though additional comparisons with related safety methods would strengthen the evaluation. Specifically, the experimental section evaluates SaP on three LLMs, namely Llama2-7B, Ministral-8B, and Qwen2-1.5B. Additionally, ablations on the impact of the concept encoder and number of safety constraints are conducted. Supplementary Material: I briefly read the supplementary material and specifically reviewed C.2 to get more details about the implementation of the baselines. Unfortunately, the description of the baselines is very sparse. Further, contradictory to the experimental setup description in the main text: "Each defense algorithm is evaluated on all 7 attack methods over 5 seeds“, Section C.2 states that "results (mean ± standard deviation) are obtained over 5 seeds, and other baselines’ results are obtained over 1 seed“ Could you clarify the number of seeded runs of the different experiments and why you chose a different number of runs for the baselines? Relation To Broader Scientific Literature: The paper is well-grounded in prior literature on LLM safety, and specifically, Section 5 describes the relation to prior literature quite well. However, even if inspired by SAEs, a comparison is missing. Further, the paper lacks a discussion on the relation to other recent inference-time steering methods, as mentioned above. Essential References Not Discussed: See above. Other Strengths And Weaknesses: **Other Strengths** - Limitations and future work are well discussed. **Other Weaknesses** - While the authors state that "our approach scales efficiently to large batches via existing tools for vectorized computation“ the computational overhead during inference is unclear. Therefore some uncertainty of the approach’s practability remains. A additional analysis of the computational overhead during inference would strength the paper. Other Comments Or Suggestions: I suggest considering an additional comparison to the above mentioned safety methods. Questions For Authors: - See comment in supplementary material section: Could you clarify the number of seeded runs of the different experiments and why you chose a different number of runs for the baselines? - Could you provide an overview of the additional computational overhead during inference? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful analysis and constructive feedback. We appreciate your recognition of our paper's key contributions: 1. The novel geometric approach to LLM safety through representation space constraints. 2. The effectiveness of SaP in defending against adversarial attacks while maintaining model capabilities 3. The interpretability insights provided by our analysis showing how different safety constraints specialize in detecting specific types of unsafe content. Below, we address your specific questions and suggestions: **Regarding experimental methodology:** > Question: "Could you clarify the number of seeded runs of the different experiments?" We apologize for the confusion. This is indeed a typo in Appendix C.2. All methods (including baselines) were evaluated over 5 seeds. As shown in Tables 1-4 in the appendix, we report means and standard deviations calculated over 5 seeds for all methods. We will correct this inconsistency in the revised manuscript. **Regarding computational overhead during inference:** > Question: "While the authors state that 'our approach scales efficiently to large batches via existing tools for vectorized computation' the computational overhead during inference is unclear. Therefore some uncertainty of the approach's practicability remains." Thank you for raising this important point. We have conducted additional experiments to quantify the inference cost of SaP. First, we measured the average per-token processing time with and without SaP across all three model architectures: | **Model** | **Configuration** | **Avg Time per Token (s)** | **Overhead** | | ------------ | ----------------- | -------------------------- | ------------ | | Llama2-7B | with SaP | 0.0301 | +29% | | Llama2-7B | without SaP | 0.0234 | - | | Ministral-8B | with SaP | 0.0381 | +8% | | Ministral-8B | without SaP | 0.0353 | - | | Qwen-1.5B | with SaP | 0.0309 | +35% | | Qwen-1.5B | without SaP | 0.0228 | - | While SaP adds approximately 8-35% overhead at the per-token level, this efficiency advantage becomes even more significant when comparing end-to-end runtime for practical tasks. For the MMLU benchmark, all runs conducted on A100 40GB GPUs via clean slurm jobs with no other program interference. For these baselines, we use the implementation from the llm-jailbreaking-defense [benchmark](https://github.com/YihanWang617/llm-jailbreaking-defense). | **Method** | **Total Runtime** | **Relative to Baseline** | | -------------------------- | ----------------- | ------------------------ | | Llama2-7B (baseline) | 24 min | 1.0× | | Llama2-7B + SaP | 38 min | 1.6× | | Llama2-7B + ICL | 11h | 27.5× | | Llama2-7B + Response Check | 11h 17min | 28.1× | | Llama2-7B + Self Reminder | 10h 42min | 26.7× | | Llama2-7B + SmoothLLM | 11h 35min | 28.8× | These results demonstrate that SaP provides a dramatically more efficient safety mechanism compared to prompt-based alternatives, requiring only 1.6× the baseline runtime while other methods require 26-29× more time. This substantial efficiency advantage stems from SaP's direct manipulation of model representations rather than relying on multiple forward passes or additional inference steps that characterize most prompt-based approaches. **Regarding additional comparisons with related methods:** > Concern: "[...] lacks a comparison to related inference time steering methods such as 1) steering vectors and 2) especially SAEs." We appreciate this valuable suggestion. Our concept encoder indeed shares implementation similarities with Sparse Autoencoders (SAEs), and we view the polytope constraint mechanism as offering a complementary geometric perspective compared to steering vectors. We agree these comparisons would strengthen our paper and plan to: 1. Provide a more thorough analysis of how our concept encoder relates to and differs from SAEs and steering vectors 2. Incorporate the suggested references and Wu et al. (2025) in our revised manuscript 3. Expand our discussion of related inference-time approaches to better position our work within this literature We believe these additions will address your concerns and enhance the paper's contribution to the field of LLM safety. Having addressed your concerns, we would appreciate your feedback during this discussion period. Let us know if there are any further questions that we can clarify, otherwise, we would appreciate it if you would consider increasing your score.
null
null
null
null
null
null
Faster Global Minimum Cut with Predictions
Accept (poster)
Summary: This paper investigates how predictions can be used to improve the running time of classic algorithms for the global minimum cut problem. Given a weighted graph $G$, the classic algorithm of Karger repeatedly selects edges randomly, proportionally to their weights, and contracts them until two vertices remain, which define the final cut. In this paper, the authors propose a prediction-augmented framework that modifies Karger’s algorithm by boosting the weights—and consequently the contraction probabilities—of edges that are predicted not to be part of the minimum cut. The algorithm operates by defining a threshold parameter $t$: while the graph has at least $t$ vertices, edge contractions are performed based on the boosted weights, after which the algorithm switches back to the classic Karger algorithm for the remaining steps. A similar boosting strategy is applied to the FPZ algorithm of Fox et al. (SODA 2019) to improve the running time of the Karger-Stein algorithm through prediction-guided edge contraction. The paper provides theoretical guarantees showing that, with good predictions, the learning-augmented algorithms recover the minimum cut with high probability, resulting in improved runtime. The analysis characterizes performance in terms of the total weight of false negative edges (respectively, false positive edges) over the total weight of edges predicted to be in the min-cut. These ratios are denoted with $\eta$ for false negatives (where $\eta \in [0,1]$) and $\rho$ for false positives (where $\rho$ could be arbitrarily large). The authors also show how effective prediction vectors can be learned from a distribution over graphs with a sample of polynomial size (hence establishing PAC guarantees for random graphs). Empirically, the proposed algorithm demonstrates speedups over the classical Karger algorithm, even when predictions are moderately noisy. The experimental evaluation includes tests on synthetic graphs, minimum cut instances derived from TSP LP relaxations, and real-world graph datasets. Claims And Evidence: Yes, the main claims of the paper are supported by clear and convincing evidence. The theoretical results are proven and establish how prediction accuracy—quantified through false-positive and false-negative ratios—impacts the success probability and runtime of the modified algorithms. The framework is carefully analyzed for both Karger’s and the Karger-Stein algorithms (via the FPZ framework). Methods And Evaluation Criteria: Yes, I find the proposed methods and evaluation criteria to be appropriate. The paper focuses on improving the runtime of classical global minimum cut algorithms using predictions, and the evaluation criteria are natural and well-justified. The experimental inputs include synthetic graphs, structured instances derived from TSP LP relaxations, and real-world graph datasets, which are reasonable benchmarks in combination. Together, the methods and evaluation setup effectively support the paper’s goal of combining theoretical improvement with empirical applicability. Theoretical Claims: Yes, I checked the structure and reasoning of the key theoretical results. The arguments seem correct to me, and the proofs follow standard techniques in randomized algorithm analysis. I also reviewed the use of false-positive and false-negative rates in the survival probability analysis and found the reasoning to be clear and interesting. While I did not verify every step in full detail, I found no issues in the high-level logic or conclusions in the proofs. Experimental Designs Or Analyses: Yes, I reviewed the experiments, which mainly compare the proposed prediction-augmented Karger algorithm to the classical Karger’s algorithm without prediction across a range of instances. The evaluation criteria and results are appropriate and aligned with the theoretical claims. However, a notable limitation is that the experimental results focus exclusively on the prediction-augmented variant of Karger’s algorithm, while the theoretical analysis also covers the Boosted FPZ algorithm (which is absent in the experiments). Including experiments for the latter would have improved the experiments. In addition, all graphs in the experiments are unweighted, while I believe the weighted case, where weights are highly skewed, is more aligned with worst-case scenarios. Finally, while less critical, it would have been helpful to include comparisons against recent state-of-the-art algorithms for global minimum cut, such as the algorithm by Henzinger et al. (SODA 2024). Supplementary Material: Yes, I reviewed the supplementary code and the presented datasets. While I did not run the code or attempt to replicate the results, the implementation appears to be correct and aligned with the description in the paper. Relation To Broader Scientific Literature: The contributions of this paper lie at the intersection of classical randomized algorithms for graph problems and the more recent field of learning-augmented algorithms, which has attracted significant attention in recent years. The paper builds upon Karger’s algorithm for global minimum cut and the FPZ algorithm of Fox et al. (SODA 2019), which extends the Karger-Stein approach. The main novelty is the introduction of prediction-guided edge contraction, where edge weights are adjusted based on predicted cut membership, and a rigorous analysis of how this modification affects the algorithm’s success probability and runtime. Given the foundational importance of the minimum cut problem in both theory and practice, this contribution is valuable. The work can also be compared to the recent improvements in exact global min-cut algorithms, such as the algorithm by Henzinger et al. (SODA 2024), which represents the current state of the art. While those advances focus on intrinsic algorithmic improvements, this paper explores a complementary direction—namely, runtime acceleration via auxiliary prediction information. That said, the paper could be strengthened by providing a theoretical comparison between the two boosted algorithms presented (i.e., the boosted Karger and boosted FPZ variants), as well as a theoretical and empirical comparison with recent state-of-the-art algorithms without predictions. Finally, the paper connects conceptually to the broader literature on algorithms with predictions, particularly the line of work initiated by Lykouris and Vassilvitskii (2021) and others, which studies performance guarantees under prediction errors. However, unlike most work in this domain, this paper does not offer formal robustness guarantees, which is an important issue that could be acknowledged and discussed more clearly. Essential References Not Discussed: I did not identify any missing references essential to understanding the paper's contributions. Recent work in exact minimum cut algorithms, in particular, the algorithm by Henzinger et al. (SODA 2024), represents the current state-of-the-art algorithms without predictions. Although this paper is orthogonal in motivation, adding a short note about its details would help place the proposed contributions in a broader context. Other Strengths And Weaknesses: - One potential weakness, which is acknowledged in the paper, is that the proposed algorithm requires knowledge of the prediction error parameters, particularly the false-positive ratio $\rho$, to set the threshold parameter $t$ for the theoretical results to hold. - The algorithms are not robust in the following sense. The false positive ratio $\rho$ can be arbitrarily large (it is decided by the weight of the edges that is not bounded as a function of $n$). If one knows $\rho$, which is somehow assumed (see the point above), the algorithm can become robust by ignoring the predictions when $\rho$ is high. But I think knowledge of $\rho$ is not given in adversarial scenarios (maybe one can justify an upper bound for $\rho$. - Experiments only consider the Boosted Karger algorithm and the unweighted graphs. Other Comments Or Suggestions: - Consider adding two plots to illustrate how the theoretical performance of the two algorithms compares. In one plot, you could fix $\rho$ (the false-positive rate) and vary $\eta$ (the false-negative rate); in the other, fix $\eta$ and vary $\rho$. This would help compare the two algorithms, visualize the trade-offs, and provide clearer insight into the relative strengths of the two approaches. - Add a note on how the algorithms can become robust if $\rho$ is known (see above). - As a minor writing suggestion, some of the notation and variable names in the theoretical sections (e.g., survival probabilities, prediction error parameters) could be introduced more formally, with additional intuitive explanations to improve readability and accessibility. - Consider rewriting Theorems 1.1 and 1.2 in a more consistent form, possibly expressing both results in terms of running time, to make the comparison between the two algorithms clearer and easier. Questions For Authors: 1- Can similar theoretical results be obtained if the values of $\rho$ and $\eta$ are unknown, but upper bounds on them are available? If so, it would be helpful to include a brief discussion of this scenario in the paper. 2- One motivation for augmenting Karger’s algorithm and its variants with predictions is that they are highly parallelizable and thus attractive for practical use. Is it correct that the algorithm of Henzinger et al. (SODA 2024) is not similarly parallelizable? If so, this distinction could be highlighted more explicitly to strengthen the motivation for your approach. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the helpful comments and suggestions. We would like to highlight the following points: + **Knowing parameters:** Our theoretical guarantees indeed assume knowledge of $\rho, \eta$. However, the same analysis applies when only upper bounds on these parameters are available: simply replace $\rho, \eta$ in the analysis with those upper bounds. We will add a discussion in the paper clarifying that the results remain valid under this relaxed assumption. + **Comparing to other cut algorithms:** This paper is the first to demonstrate how learned predictions can speed up global min-cut algorithms, providing a proof of concept for this new direction. While advanced deterministic or near-linear-time algorithms (e.g., Henzinger et al. (SODA 2024)) exist, they often involve overheads that limit their usability in practice. In fact, as we point out on page 2, due to such overheads, the two most popular network algorithm libraries implement minimum cut algorithms which in theory are far slower. In contrast, Karger’s algorithm remains popular in many graph libraries due to its simplicity and strong parallelization potential. By incorporating learned predictions into Karger’s and Karger-Stein’s algorithms, we show how one can achieve meaningful runtime improvements in realistic settings without sacrificing their desirable practical properties. We will further expand on this point in the final version, highlighting the potential for future extensions of this prediction-augmented framework to other cut algorithms. + **Parallelism:** Karger’s algorithm (and, by extension, our Boosted Karger variant) is known for its high parallelizability, making it appealing for large-scale distributed settings. We will emphasize in the revised manuscript that Henzinger et al. (SODA 2024), while state-of-the-art in terms of asymptotic complexity, does not share the same level of parallelization potential. This distinction underscores one advantage of our learning-augmented framework. + **Runtime of Boosted Karger vs Boosted FPZ:** We remark that one theorem refers to the success probability and the other refers to the run time. Therefore, the net runtime of Boosted Karger’s under the requirement that it succeeds with constant probability is $O(n^{2\eta}\rho^{2(1-\eta)}m)$, since each trial of Karger’s takes $O(m)$ time. Thus, mirroring comparisons between Karger’s and Karger-Stein, Boosted FPZ is always better in terms of sequential runtime. However, Boosted Karger’s like the original Karger’s remain highly parallelizable. We will be happy to make this point explicitly upon revision. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I found the results in the paper compelling and have updated my current score.
Summary: Algorithms for boosting minimum cut algorithms with predictions are studied. The authors propose two methods: the boosted Karger’s algorithm and the boosted Karger-Stein method. These methods rely on predictions from a machine learning model to guarantee multiplicative improvements in runtime (Theorems 1.1 and 1.2). Additionally, the authors describe how, given samples from a fixed random graph model, to efficiently find predictions that minimize an upper bound on the expected runtime of the boosted Karger’s algorithm. Empirical results on synthetic and real datasets demonstrate the efficacy of the boosted mincut algorithms over their non boosted counterparts when predictions are provided. A practical application to accelerating LP TSP relaxations via subtour elimination is also demonstrated as well as three real network datasets. Claims And Evidence: The authors study two classic minimum cut algorithms augmented with predictions. Several theorems characterizing the runtime of these algorithms in terms of the quality of predictions are provided. Additionally, an algorithm to learn optimal predictions (with respect to the runtime of the proposed prediction-augmented cut algorithms) from data is designed and its runtime is characterized. These bounds are justified both via proof and empirical evidence on a variety of synthetic and real datasets. Methods And Evaluation Criteria: The proposed evaluation criteria (synthetic random bipartite graphs, TSP problems, and real graph benchmarks) are reasonable. Theoretical Claims: I have checked results and proofs in the main text and results in the appendix related to the learning algorithm. Experimental Designs Or Analyses: Empirical evidence includes experiments on synthetic and real datasets, demonstrating the efficacy of boosted algorithms over their non boosted algorithms over a range of controlled predictions qualities Supplementary Material: I have reviewed section A2 in the appendix. I have not reviewed the proofs in section A1. Relation To Broader Scientific Literature: Research into how predictions may inform classic algorithms has important implications in many fields. This paper serves as a good benchmark for how a thorough analysis might be conducted. Essential References Not Discussed: I am not aware of missing relevant citations. Other Strengths And Weaknesses: Strengths: The authors study variations of two min-cut algorithms with prediction advice and provide a novel analysis and suite of results to characterize how the quality of the predictions may be used to augment minimum cut algorithms. The topic is an interesting and important, and this paper serves as a nice contribution. The authors demonstrate how one may learn predictions to minimize the expected runtime of the two algorithms. Empirical evidence on synthetic and real benchmarks is provided to demonstrate how predictions may assist. The paper well-written. The analysis is comprehensive and technically correct. Code is provided, which is appreciated. Weaknesses: The experiments are somewhat limited in scope. I would expect some empirical investigation of how the learning algorithm performs and associated bounds on the expected runtime. Other Comments Or Suggestions: "predictioas" typo in conclusion Questions For Authors: I have no pressing questions at this time. I would be happy to see this paper in icml. Although I would be happy to see empirical performance on a wider range of random graphs (e.g. other clustered random graphs), and an empirical demonstration of the k-cut Kargers algorithm. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thorough review. We see our work as a first step in speeding up a fundamental combinatorial optimization problem, and we certainly agree that extensively assessing and demonstrating the empirical performance improvements of prediction-augmented algorithms for such problems is both warranted and a valuable avenue for future work.
Summary: This paper presents an adaptation of two randomized algorithms for mincut to take into account predictions about whether specific edges appear in a minimum cut. For the simpler of the algorithms, the change consists in simply making a randomized choice of edge by weighing the edges by the prediction. The paper proves the modified algorithm has improved probability of finding a minimum cut, under generous assumptions for the error in the prediction. It also proves that it is possible to learn such predictions with small error and small sample size. An experimental evaluation shows more specific data on the performance of these algorithms in practice. Claims And Evidence: The claims are well supported, both theoretical and practical Methods And Evaluation Criteria: I had doubts about the setting of graphs with the exact same set of vertices, but the scenario of using this in the context of solving a TSP made sense. Theoretical Claims: I have not checked the proofs in detail. Experimental Designs Or Analyses: I found no issues. Supplementary Material: I have not read them Relation To Broader Scientific Literature: The paper follows a large body of work (correctly cited) on using predictions to improve existing algorithms. Essential References Not Discussed: None that I know Other Strengths And Weaknesses: I liked the writing: factual, honest with respect to drawbacks, to the point. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review, and for appreciating our transparent writing style.
Summary: The paper studies global minimum cut with predictions. The problem is given a weighted graph $G$, find a partition of the vertices $S, V \setminus S$ that minimizes the total weight of edges crossing the cut. Without predictions, there are two main baselines: first a naive version of Karger's minimum cut algorithm which runs in $\tilde{O}(n^2)$ time and succeeds with probability $\approx 1/n^2$. The second is a smarter version of Karger's algorithm that runs in overall $\tilde{O}(n^2)$. The authors present improvements on both of these algorithms using an oracle which gives a predicted set of edges constituting the minimum cut (the authors generalize this to the case where the predictions give a fraction for every edge being in the minimum cut). The two main notions used to measure the error of the predictions are false positives and false negatives (weighted fraction of edges which are incorrectly stated to be in min cut compared to opt vs weighted fraction of edges which are erroneously left out compared to opt). Interestingly the two types of errors have different influences on the final error. The most interesting regime seems to be for sparse graphs. Assuming that false positives are $o(n)$ fraction of the actual min cut, the submission is able to get a near linear time bound. The main, and quite natural, idea is to boost the probability of picking edges that are not predicted to be in the min cut in the contraction steps of Karger's algorithm. Claims And Evidence: Yes, the theorem statements seem to have correct proofs. Methods And Evaluation Criteria: I have some slight issues with the experiments, see below. Theoretical Claims: I did not check too closely but the main ideas made sense to me. There is one minor question: Should theorem 1.2 have a $+ O(m)$ factor in the running time since we need to read the graph? Experimental Designs Or Analyses: The experiments seem sound but there are some drawbacks: - One needs to know the values of false positive and false negative rates to initialize the algorithm. While the authors claim that the algorithm is insensitive to such parameters, there was not an extensive empirical evidence for this. It would be better to have a version of the theoretical algo , maybe with slightly worse guarantees, but that is robust to miss-specification of these parameters. - The augmented FPZ algorithm, which is the main technical contribution, is actually not implemented. Thus, the traditional "optimized" version of Karger may actually be faster in practice. - Details on n and m are omitted for the real datasets. Overall, both synthetic and real world graphs seem to be really tiny. - Figure 2A is missing error bars Supplementary Material: No I did not. Relation To Broader Scientific Literature: Perhaps the datasets in [1] below could be a good test bed since they test learning augmented graph algorithms on actual large graph datasets (which are varying across time so getting predictions is quite natural). [1] Triangle and Four Cycle Counting with Predictions in Graph Streams. ICLR 2022 Essential References Not Discussed: None, the submission is pretty self contained. Other Strengths And Weaknesses: - Pro: boosting the weight of an edge based on its prediction seems a natural idea and it's great that they could execute this plan. I think this could be a useful meta idea for future learning augmented works and i find this conceptually very interesting. - Con: Vertex predictions seem more natural in this case? For example, edge predictions could lead to inconsistencies (that maybe one can trivially fix which doesn't seem that natural?) For example, if the prediction says edge (a,b) is in the min cut and (c, d) is in the min cut, then it should imply something about the edges between these vertices as well. However this information is not used or doesnt seem to matter which I find a bit strange. Overall, I am leaning towards acceptance. Other Comments Or Suggestions: None. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. We would like to highlight the following points: + **Theorem 1.2:** We agree with the reviewer’s observation regarding Theorem 1.2, but this is already taken care of in the theorem statement. Note that the bound in Theorem 1.2 is an ($\eta$-weighted) geometric mean of $m$ and $n^2$, and hence, is always $\Omega(m)$. + **Insensitivity to oracle parameters:** Regarding empirical evidence of robustness to knowledge of false negatives and false positives, we refer the reviewer to the first set of experiments, as shown in Section 5.1 and Figure 1. In these, regardless of the value of $\eta, \rho$, without any tuning, we set $(B, t) = (n,2)$ in the algorithm. Yet the algorithm demonstrates a marked improvement over Karger’s empirically for a wide variety of settings. + **Vertex-partitioning predictions:** We appreciate the suggestion to investigate vertex-partitioning predictions. In our setting, predictions on edges provide richer information about local pairwise interactions, and our analysis leverages these interactions for robust guarantees. As stated, we do not place any restriction on such predictions, i.e., they may not come from a cut. Note that vertex-based binary predictions can always be translated to edge-based predictions, while the reverse translation is not always possible; thus our prediction model is more generally applicable. The appropriate treatment of probabilistic predictions (as almost all ML models avail us) is also more subtle for vertex-partitioning predictions. All said, deriving faster algorithms based on vertex-based predictions with runtime parametrized by the right characterization of error in such models remains an excellent direction for future research.
null
null
null
null
null
null
Improving Flow Matching by Aligning Flow Divergence
Accept (poster)
Summary: The paper makes a very keen insight that the "TRUE" goal of flow matching is to approximate the probabilty time series $(t \mapsto p_t)$ with the approximate probability time series $(t \mapsto \hat{p}_t)$. In doing so, they begin from the difference $\hat{p}_t - p_t$ and that, in order to make this small in terms of total variation, one must not only align the corresponding vector field v that satisfies the CE wrt p_t, but also the divergence. In particular if $(p_t, v_t)$ and $(\hat{p}_t, \hat{v}_t)$ are both the CE satisfying solutions, then both - $| v- \hat v|$ as wellas - $| \nabla \cdot v- \nabla \cdot \hat{v} |$ must become small. Just as there is a conditional-localized counterpart ($L_{CFM}$) for the loss of $|v- \hat v|$ ($L_{FM}$)through the trick resembling Denoising score matching, they construct the localized counterpart $L_{CDM}$ for the original target about $| \nabla \cdot v- \nabla \cdot \hat{v} |$ ($L_{DM}$) . Their proposal is to balance both of them and simultaneoulsy optimize them at the same time. Unlike $L_{CFM}$ that is known to be just a scaler different from $L_{FM}$, they were able to show only $L_{DM} < L_{CDM}$. But this inequality is in the friendly direction, because LHS can be made smaller by making RHS smaller. They first show that this is obviously effective on synthetic dataset in terms of both final accuracy and the speed of learning. They showcase their result on categorical domain (DNA-Sequence) as well as PDE, and even Video Prediction with latent modeling. Claims And Evidence: Their claims are mathematically proven. The efficacy of the model is proven both on synthetic dataset, dataset of continuous domain, dataset of discrete domain, and dataset of practically an infinite dimensional domain. Methods And Evaluation Criteria: The proposed method is general in nature, and I believe that it shall be evaluated not on the basis of the SOTA result on each domain but on the basis that it can make improvement on the competitive method based on FM, which is done properly with reasonable metric. Theoretical Claims: Theoretical claims seems mostly valid on the appendix. Experimental Designs Or Analyses: Experiment on the Dynamical system and Video prediction seems particulaly well designed and their evaluations seems to be in alignment with the conventional metric. Supplementary Material: If it means the Appendix section and the proofs therein, yes, but I did not follow every single line of algebraic manipulations. Relation To Broader Scientific Literature: NA Essential References Not Discussed: Nothing in particular that came to my mind. Other Strengths And Weaknesses: ## Strengths: - The paper is well motivated - The proposition is convincing, and it is theoretically supported. - When the exact same technique as CFM did not work for the DM, the paper proposed a work-around solution of upper bound, which is very noble - experiments are well designed, and the results are promising ## Weakness: - I will ask a question or two in the questions section - While the paper's proposition is convincing, the proposition introduces an hyperparameter(the balancing of CDM and CFM) whose meaning is a little hard to interpret in terms of data (it is clear in terms of definition, but its choice seems hard to make based on the nature of the data itself). This might however be somehow inferrable from the choice of the path distributions from which to create the supervisory signal (using OT, for example, make a difference about the optimal choice of $\lambda$. It will be great if some knowledge can be shared regarding the relation between the choice of conditional vector field and the suggested choice of $\lambda$s. Other Comments Or Suggestions: Please see the section above. Questions For Authors: ### Q1 While I find the localization of DM with CDM very inspiring, as is noted in the paper this requires the computation of $p(x|x_1)$, which in the pure FM setting is a delta function, which would cause a problem. Indeed, this is not an essential problem of the philosophy of the work itself, beause DM only looks at $\nabla \log p_t(x)$, in which $p_t$ is guaranteed to be absolutely continuous whenever $p_0$ is. As is mentioned in the early part of the manuscript, the paper is resolving this problem by assuming the path to be taking the form of $$ \psi(t | x_1, x_source, \epsilon) = (1-t) x_{source} + t x_1 + \sigma_t \epsilon$$ where $\epsilon$ is a Gauss, and in experiments $\sigma_t$ is modeled in VP or VE way, and derive the corresponding vector field. But this $\sigma_t$ looks a bit artificial, especially when $x_{source}$ is not Gaussian, for example--- and one of the strength of FM lies in its application of mapping an arbitrary distribution to another. It feels that, when the method is combined with the works like (Simulation-free Schrödinger bridges via score and flow matching), things would also align really well. Has this work been considered to be applied to Schrodinger Bridge? ### Q2 Howq does the computational overhead scale by the addition of $L_{CDM}$? I very much believe that the philosophy of this work deserves much attention, but the gradient of $\nabla \cdot \hat{v | \theta}$ would, in the process of backpropagation, require second derivatives, and that sounds a bit scary from the engineering perspectives when we think of scaling this up to the level of very large networks. Would you please comment on that? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. We have revised the paper according to all reviewers’ feedback. In what follows, we provide point-by-point responses to your comments. ---- **Q1. It will be great if some knowledge can be shared regarding the relation between the choice of conditional vector field and the suggested choice of $\lambda$s.** **Response:** It is an interesting comment. Our design of the new loss function - Equation (19) of our submission - is based on the empirical observation and discussion in Lines 268-274 (left column) and 220-229 (right column) of our submission. During our submission, we did not consider the meaning of $\lambda$s in terms of data, but it is an interesting future direction to design a principle to choose $\lambda$s optimally. We have acknowledged this point in the revised manuscript. ---- **Q2. While I find the localization of DM with CDM very inspiring, … It feels that, when the method is combined with the works like (Simulation-free Schrodinger bridges via score and flow matching), things would also align really well. Has this work been considered to be applied to Schrodinger Bridge?** **Response:** Thank you for the very insightful comments. We have not considered the Schrodinger bridge yet, while we believe it is indeed an interesting research problem. We have briefly discussed this and cited the related reference that the reviewer pointed out in the revised manuscript. In the flow matching setting, the conditional probability path and the associated vector field are often designed on a per-sample basis. Therefore, our proposed CDM is tractable, similar to the baseline conditional flow matching. ---- **Q3. How does the computational overhead scale by the addition of $L_{CDM}$? The gradient of $\nabla\cdot \hat v(x,\theta)$ would, in the process of backpropagation, require second derivatives, and that sounds a bit scary from the engineering perspectives when we think of scaling this up to the level of very large networks. Would you please comment on that?** **Response:** In practice, the divergence term $\nabla\cdot \hat v(x,\theta)$ is computed efficiently using the Hutchinson estimator (see Lines 307-309 in our submission). We follow the approach used in [1] and [2], which employs the Hutchinson estimator to approximate the divergence term. Then, we only need to compute the derivative of a scalar-valued output with respect to the input to construct the loss, which involves a single call to torch.autograd.grad() and adds just one additional backward pass. Since we're only doing one extra backward pass, the additional memory footprint and computational time of computing the divergence loss scales similarly to that of the original CFM loss with respect to the data dimensionality. We compared the training times and peak memory usage, and found that incorporating the divergence loss does not significantly hinder the model's usability, with computational time and memory footprint remaining mostly within 1.5 times that of the original CFM. [1] Lu, Cheng, et al. "Maximum likelihood training for score-based diffusion odes by high order denoising score matching." International Conference on Machine Learning. ICML, 2022. https://arxiv.org/pdf/2206.08265 [2] Lai, Chieh-Hsin, et al. "Fp-diffusion: Improving score-based diffusion models by enforcing the underlying score fokker-planck equation." International Conference on Machine Learning. ICML, 2023. https://arxiv.org/pdf/2210.04296 ---- Thank you for considering our rebuttal.
Summary: The paper proposes a very simple KL loss combined with CFM loss to improve the training of flow-based models. The optimizing results are very general across different tasks with a basic improvement. Claims And Evidence: NA Methods And Evaluation Criteria: method Theoretical Claims: NA Experimental Designs Or Analyses: NA Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: S: 1\ Demonstrates consistent improvements over CFM across diverse tasks (synthetic data, dynamical systems, DNA, videos) with minimal computational overhead. 2\ Avoids costly higher-order score matching, making divergence alignment computationally practical. W: 1\ Lack of implementation details which may indicate unfair comparisons. Other Comments Or Suggestions: 1\ The organization of the contribution part should be improved. 2\ Quantify the individual impact of the divergence loss term L_cdm on performance. Questions For Authors: 1\ How does FDM’s training time and memory footprint scale with data dimensionality especially for high-resolution videos? 2\ Can FDM be applied to non-Gaussian or non-OT probability paths Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. We have revised the paper according to all reviewers’ feedback. In what follows, we provide point-by-point responses to your comments. ---- **Q1. Lack of implementation details which may indicate unfair comparisons.** **Response:** In our submission, we have provided experimental and implementation details in Appendix B. To ensure fair comparisons with benchmarks, we use the same neural network architectures and model hyperparameters as the baseline methods. In particular, we use the same dataset and splitting, and employ the same search space for the batch size with the same training iterations. --- **Q2. The organization of the contribution part should be improved.** **Response:** Thank you for your suggestion. We have made the contribution section more concise and bulleted the key points in the revised paper. --- **Q3. Quantify the individual impact of the divergence loss term $L_{CDM}$ on performance.** **Response:** Our training loss is a weighted sum of $L_{CFM}$ and $L_{CDM}$ as shown in Equation (19) of our submission. Only using $L_{CFM}$ is the same as classic conditional flow matching, while, as we state in Lines 268-270 of our manuscript, directly minimizing $L_{CDM}$ cannot yield appealing results. Detailed explanations are given after those lines. In particular, we notice empirically that training the model by minimizing $L_{CDM}$ alone can be quite noisy. Using a weighted sum of $L_{CFM}$ and $L_{CDM}$ can improve the performance of the classic conditional flow matching, solidifying the benefits of our proposed approach. --- **Q4. How does FDM’s training time and memory footprint scale with data dimensionality, especially for high-resolution videos?** **Response:** Compared to the standard conditional flow matching (CFM), our proposed loss function involves an additional divergence term and essentially has a similar scalability as CFM. In practice, instead of explicitly computing the Jacobian matrix, we follow the approach used in [1] and [2], which employs the Hutchinson estimator to approximate the divergence term (see Lines 307-309 in our submission). Then, we only need to compute the derivative of a scalar-valued output with respect to the input to construct the loss, which involves a single call to torch.autograd.grad() and adds just one additional backward pass. Since we're only doing one extra backward pass, the additional memory footprint and computational time of computing the divergence loss scales similarly to that of the original CFM loss with respect to the data dimensionality. We compared the training times and peak memory usage, and found that incorporating the divergence loss does not significantly hinder the model's usability, with computational time and memory footprint remaining mostly within 1.5 times that of the original CFM. Regarding the video experiment mentioned by the reviewer, we first use a pretrained model to map the video data into a latent space. As a result, the computational cost of the flow matching model depends only on the dimensionality of the latent variables, not the raw video data. [1] Lu, Cheng, et al. "Maximum likelihood training for score-based diffusion odes by high order denoising score matching." International Conference on Machine Learning. ICML, 2022. https://arxiv.org/pdf/2206.08265 [2] Lai, Chieh-Hsin, et al. "Fp-diffusion: Improving score-based diffusion models by enforcing the underlying score fokker-planck equation." International Conference on Machine Learning. ICML, 2023. https://arxiv.org/pdf/2210.04296 --- **Q5. Can FDM be applied to non-Gaussian or non-OT probability paths?** **Response:** Yes, FDM can be applied to non-Gaussian paths. In our DNA sequence generation experiment, we use a Dirichlet probability path, which was designed in [3]. [3] Stark et al. Dirichlet Flow Matching with Applications to DNA Sequence Design, ICML, 2024. ------ Thank you for considering our rebuttal.
Summary: The paper proposes a modification to the flow matching / stochastic interpolant loss so as to better control the total variation distance between the model and the target at the final time of sampling, motivated by the fact that the standard loss is not sufficient to control the KL divergence (based on some assumptions on the target). The modification they propose is a sort of "divergence matching" loss whereby the divergence of the model vector field is trained to match the divergence of the ground truth vector field defined by the interpolant. They test the method on synthetic datasets used in previous works, trajectory sampling, dna sequencing, and video forecasting. Claims And Evidence: The motivation for the project is clear. They seek to control a divergence metric for their loss for a deterministic, ODE-based flow model. Other work has only done this using higher-order derivatives than the ones they consider here. However, the related work can get control on the KL, while this work is only on the TV. The authors acknowledge this fact. One issue with the application of the method in the experiments that I'd like the authors to clarify: At the end of page 6, the authors state "we need to express the score function $\nabla \log p_t(x)$ learned by diffusion models in terms of the learned vector field $v_t (x, θ)$. This limits us to choosing a conditional probability path corresponding to an SDE with a known drift term $f$ and noise coefficient $g$." Note that this relation between the velocity field and the score *is only true if you have actually found the velocity field associated to the interpolant/probability path you specified*. If you have not learned this, then the equation relating $v_t$ to $s_t$ is not valid. *Moreover*, this score only corresponds to the actual $\nabla \log p_t$ if the velocity is exactly learned. So it is not clear that the equation at the bottom of page 6 is an exact relation, and would introduce extra errors. Methods And Evaluation Criteria: The proposed benchmarks make sense. They are basically doing a set of controlled trials comparing the standard method to their modification. It would also be nice to see how this performs versus the broader literature, though. Theoretical Claims: I checked the correctness of the proofs. They seem like decently straightforward applications of Jensen and Young's inequality. One thing I was looking for in particular but did not necessarily find: The bounds rely on an analytic form for the score $\nabla \log p_t(x)$ arising from solving the continuity equation up to time $t$. This quantity is *not* equivalent to the score arising from the model i.e. $\hat s_t \neq \nabla \log p_t$, nor is it something that can be written down by relating a learned velocity field to a score. Experimental Designs Or Analyses: There is a wide breadth of experiments based on existing applications of generative models that people have tried. The method seems to preform better, but only marginally. A question, then, is: given the fact that loss is more expensive (i.e you have to compute and match divergences), how does one measure the advantage of the technique properly? Can In addition, how does the method compare for forecasting videos to: https://arxiv.org/pdf/2403.13724 ? It seems they report much lower FIDs, but also claim that Latent FM gets lower FIDs as well. It is probably worth citing this relevant literature, which seems to excel at this task. Supplementary Material: I reviewed the proofs. They seem correct. I was checking to see if they relied on this false equivalence between a learned score and $\nabla \log \p_t$ but I didn't see such an issue. Relation To Broader Scientific Literature: The contribution is to describe an avenue for improving a very commonly used generative modeling paradigm. There is not existing work as far as I know on divergence matching this way. Essential References Not Discussed: The authors should probably also cite Liu et al, Flow Straight and Fast (2023) for its contributions to the flow matching literature as well. Other Strengths And Weaknesses: In general, it would be nice to have a better understanding of the memory footprint given the divergence you have to compute. Does the method work when replaced with a hutchinson estimator, or is it too noisy? Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. We have revised the paper according to all reviewers’ feedback. In what follows, we provide point-by-point responses to your comments. --- **Q1. At the end of page 6, … it is not clear that the equation at the bottom of page 6 is an exact relation, and would introduce extra errors.** **Response:** Thank you for your comment, which helped us identify areas where we can improve our presentation, and we have revised the paper accordingly. To address your concern, we stress that our theoretical results do not depend on this relation for the learned vector field $\mathbf{v}_t(x,\theta)$. As the reviewer noted, this relation holds only for the ground-truth vector field $\mathbf{u}\_t$ and the score function of its corresponding probability flow $p_t$ (due to the uniqueness of solutions to the continuity equation). Our derivations rely solely on the continuity equation and initial conditions, not on this specific relationship, which aligns with the reviewer’s observation. Our original intent in introducing this relation was to define a flow-matching model with the vector field $\mathbf{u}\_t$ corresponding to the score-based diffusion model with drift term $\bf f$ and noise coefficient $g$. Notice that approximating $\mathbf{u}\_t$ with a parameterized vector field $\mathbf{v}\_t(x,\theta)$ is enough to generate data effectively. However, for the numerical results (e.g., conditional probability computations in Table 2), we require an approximation of the score function $\nabla \log p_t$. Instead of computing $\nabla \log \hat p\_t$, we substitute the learned vector field $\mathbf{v}\_t(x,\theta)$ into the following relation to estimate the score: $$\nabla \log p\_{1-t}(x) = 2\frac{\mathbf{u}\_t(x)+{\bf f}\_{1-t}(x) }{g^2_{1-t}}.$$ While this introduces an approximation error, it is a practical trade-off for computational feasibility. --- **Q2. The bounds rely on an analytic form for the score $\nabla\log p_t(x)$ arising from solving the continuity equation up to time $t$. This quantity is not equivalent to the score arising from the model i.e. $\hat s_t\neq \nabla \log p_t$, nor is it something that can be written down by relating a learned velocity field to a score.** **Response:** Indeed, computing $L_{DM}$ requires $\nabla\log p_t(x)$, which is derived by solving the continuity equation. To address this challenge, we introduce its conditional variant $L_{CDM}$. For $L_{CDM}$, we only need access to $\nabla\log p_t(x|x_1)$. In practice, we design $p_t(x|x_1)$ such that this quantity is readily available by plugging in values, bypassing solving an ODE explicitly. This approach ensures computational feasibility while maintaining the integrity of our bounds; see Theorem 4.2. --- **Q3. The method seems to perform better, but only marginally. Given the fact that loss is more expensive …, how does one measure the advantage of the technique properly?** **Response:** The improvements for all tasks except trajectory sampling for dynamical systems are way over twice the standard deviation. For dynamical systems, we further perform a t-test with the null hypothesis that the mean negative log-likelihoods (NLLs) computed by two distinct models are equal. We use 32,000 test trajectories to compute each mean NLL. The t-test confirms that the improvement in the likelihood estimation of FDM over FM is significant. Our approach introduces additional computational cost, but it gives better generation results and better likelihood estimation. For tasks requiring accurate likelihood estimation, our approach is especially more valuable compared to the baseline flow matching. --- **Q4. How does the method compare for forecasting videos to [1]?** **Response:** Thank you. We have cited this paper in the revision. Indeed, the results in [1] are impressive. We only found the codes for Navier-Stokes and CIFAR at github.com/interpolants/forecasting, which is the reason we did not include this approach as a benchmark in our study. [1] https://arxiv.org/pdf/2403.13724 --- **Q5. The authors should probably also cite Liu et al, Flow Straight and Fast (2023) for its contributions to the flow matching literature as well.** **Response:** We have cited this paper in the revision. --- **Q6. Memory footprint in computing divergence. Does the method work when replaced with a Hutchinson estimator?** **Response:** In our implementation, we use the Hutchinson estimator to approximate the divergence term; see Lines 307-309 in our submission. It works well and does not significantly raise the memory footprint, which consistently stays within 1.5×. For the 2D tasks, we test both computing the Hutchinson divergence estimator and the exact one; both result in nearly identical results. We use the Hutchinson estimator in all the other experiments due to memory constraints, following [2]. [2] https://arxiv.org/pdf/2206.08265 --- Thank you for considering our rebuttal. --- Rebuttal Comment 1.1: Comment: Thanks for the information. I'll maintain my score, as it is already a weak accept!
Summary: The paper seeks to use PDEs to construct a theoretical bound on flow matching, and improve upon it using said insight by adding a divergence mismatch to the loss term which improves upon the probability path. They construct experiments on simple generative examples, along with DNA sequence generation and video prediction and show that it improves upon existing methods. Claims And Evidence: Their claims seem to be sufficiently supported mathematically and experimentally. I would like to see the performance of their method on some more diverse video datasets or standard image benchmarks for generative modeling (CIFAR, Imagenet). Methods And Evaluation Criteria: The non-toy experiments seem to be the DNA sequence generation and the KTH dataset. The KTH dataset isn't used usually for generative modeling benchmarks due to its limited diversity in terms of samples. It works as a simple example but some more diverse motion datasets such as BAIR (as done in Davtyan et. al (2023)) could be more interesting. Evaluation criteria seem to be the de-facto norm for all the experiments. Theoretical Claims: I checked section 3 for correctness, and the claims seem to be valid. Experimental Designs Or Analyses: Specific experiment soundness seem valid and include error bars for some of the experiments, which seems sufficient. Supplementary Material: No. Relation To Broader Scientific Literature: The paper offers some key insights for flow-based generative modeling by refining the flow matching approach, which also offer enhanced likelihood estimation. They also demonstrate their strengths with applications in structured data such as dynamical systems, dna sequences, and videos. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: Minor typos (does not effect valuation) L_{FDM} is used in the graph of Figure 1, but caption reads L_{DM} Questions For Authors: None. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. We have revised the paper according to all reviewers’ feedback. In what follows, we provide point-by-point responses to your comments. ---- **Q1. I would like to see the performance of their method on some more diverse video datasets or standard image benchmarks for generative modeling (CIFAR, Imagenet).** **Response:** During the rebuttal period, we conducted experiments to showcase the efficacy of our proposed FDM in improving flow matching (FM) for these reviewer-mentioned tasks. We first compare the performance of FDM against the baseline conditional FM (CFM) for generative modeling of CIFAR10 here, and we will show the advantages of FDM for other video generation tasks in **Q2**. We follow the experimental settings in Appendix E of the flow matching baseline paper [1] to train the model for CIFAR10 generation. Due to the time constraint, we only compare our FDM against CFM for CIFAR10 generation using the better-performing optimal transport (OT) path in the baseline paper [1]. The following results confirm the advantages of FDM. | Method | NLL ↓ | FID ↓ | |-------------|------------|-----------| | CFM | 2.99 | 6.35 | | **FDM (ours)** | **2.85** | **5.62** | [1] https://arxiv.org/pdf/2210.02747 --- **Q2. The KTH dataset isn't used usually for generative modeling benchmarks due to its limited diversity in terms of samples. It works as a simple example but some more diverse motion datasets such as BAIR (as done in Davtyan et. al (2023)) could be more interesting.** **Response:** We appreciate your feedback. For the BAIR dataset, we predict 15 future frames based on a single initial frame, with each frame having a resolution of $64 \times 64$ pixels. Due to the highly stochastic motion in the videos of the BAIR dataset, we evaluate the model as follows (following [2]): We randomly select 256 test videos, and generate 100 samples per test video where each model is conditioned on the same initial frame of the video. Finally, we compute the FVD for each model comparing the $256\times 100$ generated samples against the 256 test videos. Due to time constraints, we omit training the frame refinement network, which operates independently of the main model but could potentially enhance sample quality. The result is as follows: | Method | FVD ↓ | Memory (GB) | Time (hours) | |-------------------------|------------------|-----------|----------------| | TriVD-GAN-FP | 103 | 1024 | 280 | | Video Transformer | 94 | 512 | 336 | | LVT | 126 | 128 | 48 | | RaMViD (Diffusion) | 84 | 320 | 72 | | Latent FM | 146 | 24.2 | 25 | | **Latent FDM (ours)** | **123 ± 4.5** | 35 | 36 | As mentioned in [2], many models for the BAIR task are computationally expensive, whereas latent FM achieves a favorable trade-off between FVD and computational cost. Our approach further improves latent FM with acceptable additional computational overhead. [2] https://arxiv.org/abs/2211.14575 ---- **Q3. Minor typos (does not effect valuation) L_{FDM} is used in the graph of Figure 1, but caption reads L_{DM}** **Response:** Thank you. We have fixed this in the revised manuscript. ------ Thank you for considering our rebuttal.
null
null
null
null
null
null
Unifying 2D and 3D Vision-Language Understanding
Accept (poster)
Summary: The paper proposes UniVLG, a model that can be trained on both 2D and 3D vision-language data for both 2D and 3D tasks. Specifically, the model relies on pre-trained 2D image features and lifts 2D data to 3D to take advantage of large-scale 2D datasets. It also defines a mask decoding head which outperforms bounding box decoders. The method directly takes sensor-generated data as input which is a more realistic evaluation setting than previous methods using mesh-reconstructed point clouds. Experiments on tasks including 3D referential grounding, 3D question answering and 2D referential grounding demonstrate the effectiveness of this method. ## update after rebuttal The rebuttal addressed my questions so I raise my score. Claims And Evidence: The paper made the following claims: 1. Its method unified 2D and 3D visual grounding. It is illustrated by experiments on both 2D and 3D referential grounding tasks with the same model. 2. It achieves state-of-the-art performance on both in-domain and out-of-domain 3D referential grounding datasets, which is shown by experiments results. 3. It proposes a language-conditioned 3D mask decoder, and its superior performance was shown in the ablation study (table 5). 4. It uses a realistic evaluation setting, which is also shown in experiments (Table 1 and Table3). Methods And Evaluation Criteria: The methods and evaluation make sense. 1. For the method, there are way more 2D datasets than 3D ones, so it makes sense to utilize the vast body of 2D data for enabling 3D tasks. The network design also makes sense to me. 2. The method is evaluated on several tasks including 3D and 2D referential grounding, 3D question answering and 3D instance segmentation using widely used benchmarks. Theoretical Claims: No theoretical claim made. Experimental Designs Or Analyses: The experimental designs and analyses make sense to me. It did experiments on three tasks in the main paper and one in the appendix. It also did a thorough ablation study to validate key designs. To make fair comparisons, it evaluated all methods using the same point clouds as well as retraining a subset of methods on sensor point clouds. Supplementary Material: I reviewed the whole appendix. No further question. Relation To Broader Scientific Literature: 1. The paper proposed a way to utilize the rich 2D data for 3D vision language understanding tasks, which largely improved their performance and bridges the gap between 2D and 3D models. 2. The paper proposed a unified architecture of both 2D and 3D tasks. 3. There are several findings that could inspire future research, e.g. the comparison between using non-parametric and parametric queries, and the essence of visual tokens updating during mask decoding. Essential References Not Discussed: NA Other Strengths And Weaknesses: 1. Strengths please see sections above. 2. Weaknesses: (1) Qualitative results are included in the appendix now (figure 3 and 4). They should be included in the main paper. (2) Since the paper claims it has a more realistic embodied-aligned evaluation setting, it would be beneficial to include results on data without ground truth pose and depth. Currently there are experiments on noisy pose and depth on SR3D in the appendix but I am more interested in real-life data instead of manually adding gaussian noises. Other Comments Or Suggestions: 1. Duplicate "performance" in L73. 2. Duplicate in L668-669 Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review. > (1) Qualitative results are included in the appendix now (figure 3 and 4). They should be included in the main paper. Thank you for the feedback. We agree that these should be included in the main paper and we will incorporate them using the additional page given for the camera-ready version. > (2) Since the paper claims it has a more realistic embodied-aligned evaluation setting, it would be beneficial to include results on data without ground truth pose and depth. Currently there are experiments on noisy pose and depth on SR3D in the appendix but I am more interested in real-life data instead of manually adding gaussian noises. We want to clarify that our experiments **do not use "ground-truth" pose or depth.** The SR3D, NR3D, and ScanRefer datasets rely on 3D scenes from ScanNet, which captures depth using a Kinect sensor and estimates camera poses with the widely used BundleFusion algorithm. Therefore, the depth and pose data in our experiments are already realistic and embodied-aligned, naturally containing real-world noise. The experiments in Appendix A.9 further explore the effects of adding additional noise beyond the existing real-world sensor noise. We are happy to clarify any remaining concerns that you may have that might lead you to reconsider increasing your score. Thank you.
Summary: This paper proposes a unified architecture for 2D and 3D vision language understanding. The method is based on Jain et al. 2024 where the additional innovations are in sharing all parameters between 2D and 3D instead of a subset, and extending the application to referential grounding, The paper uses a. number of existing SOTA methods in different parts of their pipeline to start with a special finding that instead of freezing the visual features, updating them is very crucial for 3D referential grounding. Claims And Evidence: The claims are in unified 2D-3D visual grounding by updating the visual features for improved referential grounding and language conditioned mask decoder. The evidences are there in the results section through quantitative and qualitative results. Methods And Evaluation Criteria: The method is primarily adjusted from Jain et al. 2024 with an update of visual features, language conditioned mask decoder. The evaluations are done on out-of-domain referential grounding, 3D question answering, 2D referential grounding. The incremental additions in the method seems working. Theoretical Claims: No theoretical claims Experimental Designs Or Analyses: Yes the experimental designs are good. Supplementary Material: No supplementary material Relation To Broader Scientific Literature: The contribution is incremental w.r.t the broader literature. Essential References Not Discussed: Yes the references are good. Other Strengths And Weaknesses: The writing of the paper is good, the method is well positioned and results are validated. The paper tries to state where they innovate and the impact of the innovations. I like the way this paper is written. Other Comments Or Suggestions: Nothing. Questions For Authors: Can we know if there are any adverse effect of changing the visual features? What if the points from wang et al. has noise? How the method behaves? Can we see the videos of segmentation in a supplementary material? What are the failure cases? What is the future work? Why maskformer had to introduced? Line 194-200 is bit convoluted. Please make them simple and illustrative. How this method perform w.r.t LLava-3D? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. > “special finding that instead of **freezing** the visual features, updating them is very crucial for 3D....” We want to clarify a potential misunderstanding: One of our significant findings is not in unfreezing the visual features but rather on allowing them to attend and propagate through the decoder (Table 7 & 5). In contrast, prior mask decoders **allow visual features to receive gradients** yet prevent them from attending to object queries and language features. As we reiterate below, this is just one aspect of our contribution. > “Incremental over Jain et al., 2024 (ODIN); Incremental w.r.t the broader literature.” We offer the following counterarguments and invite you to reconsider your position: - UniVLG tackles referential grounding and question-answering, while ODIN focuses solely on object segmentation. - UniVLG significantly modifies the mask decoder to better incorporate language information. Our design choices—updating visual features, using parametric queries, and adding a box loss—are crucial for referential grounding (Tables 5 & 7). Reviewer aKaE acknowledges this, stating that insights like “updating the visual feature” are novel contributions. - We generate synthetic 3D data by lifting 2D RGB images into 3D pointmaps, sharing all parameters across 2D and 3D pathways rather than a subset. Reviewers aKaE, 5zKN, and SyhT recognize this as a unique strength or “novel.” With these advancements, **UniVLG outperforms ODIN by over 25% and LLaVA-3D by 9.4% on 3D language grounding, underscoring the importance of our design choices.** None of these changes were trivial or obvious, and lead to significant performance improvements. **Broader Impact**: - **UniVLG demonstrates that unified 2D-3D visual language models can enhance data-scarce 3D modalities without sacrificing 2D performance.** The 3D VLM field is bottlenecked by a lack of large-scale, diverse datasets, and we believe that showing ways of using 2D datasets and pre-trained weights for VLMs is an important contribution. While **ODIN also tried this for 3D segmentation task, its strategy of skipping 3D layers for 2D data leads to suboptimal performance (Table 6) and near-zero 2D-to-3D generalization (Table 12).** We not only show more diverse results than ODIN, we also improve sharing between the 2D and 3D modalities further and benchmark all our methods on more realistic setups which is “largely overlooked by prior studies” (Reviewer aKaE, a8kZ) in 3D vision-grounding literature. As Reviewer SyHt notes, “There are several findings that could inspire future research, e.g. the comparison between using non-parametric and parametric queries, and the essence of visual tokens updating during mask decoding." - **UniVLG achieves SOTA results in 3D referential grounding**, a highly active field with dozens of papers published annually. It surpasses prior work by over 10%. If our approach were merely an incremental change over ODIN, a similar baseline would already exist. Furthermore, if a seemingly “incremental” modification yields substantial improvements—especially in more realistic settings—we argue that it is highly relevant to the community and is worthy of dissemination. Thank you for your other questions! Our paper and supplementary file address almost all of them. It seems you may have missed our supplementary material, which starts on Page 13 of the main PDF. Here are the relevant sections for your convenience: > "Adverse effect of changing the visual features with noisy inputs?" See Section A.9 (Page 15) and Figure 6 (Page 19), where we experiment with varying noise levels in camera pose and depth maps. Our method remains robust to noisy inputs. > “Videos of segmentation in suppl.?” Extensive qualitative visualizations are available in Figure 3 (Page 16), Figure 4, and Figure 5 (Page 17) of appendix. > Failure cases? See Section A.8 (Page 15) for a detailed discussion and visualizations of failure modes. > Future work? - We currently train with significantly less 2D data than SOTA 2D VLMs. A natural next step is scaling with more 2D data and studying its impact. - Our method is designed for static 3D scenes; extending it to dynamic 3D environments is an important future direction. We will include a dedicated future work section in the camera-ready version. > “Why maskformer had to introduced?” We built our mask decoder head on Mask2Former, as we find that it outperforms box-decoding heads (Tables 5 & 7). Additionally, mask decoding unifies the output space across 2D and 3D via per-pixel segmentation masks. > "Lines 194-200 are convoluted. Please simplify." Thank you for the feedback. We will revise this for clarity. > "How this method perform w.r.t LLava-3D?" As shown in Table 1, UniVLG outperforms LLaVA-3D by 9.4% on ScanRefer, despite LLaVA-3D using mesh point clouds while UniVLG relies on sensor point clouds. We will move some of these to the main paper. --- Rebuttal Comment 1.1: Comment: Many of the questions are answered in the rebuttal and hence raising the scores. Thank you!
Summary: This paper presents UniVLG, a unified vision-language model designed to bridge the gap between 2D and 3D vision-language understanding in embodied AI systems. Given the scarcity of well-annotated 3D datasets, UniVLG explores the transfer of vision-language knowledge from well-curated 2D data to enhance 3D reasoning. The model leverages pre-trained 2D VLMs and is trained on a diverse set of 2D and 3D vision-language tasks, allowing effective cross-modal learning. UniVLG processes 2D images or RGB-D inputs during training and inference, eliminating the need for explicit 3D mesh reconstructions. This design makes it more aligned with realistic embodied AI applications, where direct sensor data is often the primary input. Claims And Evidence: The paper focuses on unifying 2D and 3D vision-language understanding by leveraging 2D-to-3D lifting strategies to enhance 3D reasoning. UniVLG achieves SOTA performance on multiple 3D referential grounding and question-answering benchmarks. Comprehensive experiments support the claims and demonstrate the effectiveness of transferring 2D vision-language knowledge to 3D tasks. Methods And Evaluation Criteria: The methods and evaluation criteria used in this paper are well-designed and appropriate for the problem setting. Theoretical Claims: The paper does not make explicit theoretical claims or include formal proofs. Its contributions are primarily focused on model design and performance improvements through practical innovations such as the 2D-to-3D lifting strategy and the language-conditioned mask decoder. Experimental Designs Or Analyses: The paper demonstrates strong performance across a range of 3D understanding tasks, including referential grounding and question answering. While the model’s 2D performance is not degraded, the discussion on 2D results is relatively limited. Supplementary Material: I reviewed the additional experiments in the appendix, which further demonstrate the effectiveness of the proposed method. Relation To Broader Scientific Literature: The paper builds upon recent advances in vision-language models and leverages DINO features for visual encoding and Moge for generating point maps from 2D images. Additionally, its approach of using 2D-to-3D lifting strategies effectively bridges the gap between 2D and 3D vision understanding. By combining these ideas, UniVLG offers a unified framework that advances both 3D referential grounding and 3D question-answering tasks. Compared to previous methods, UniVLG eliminates the reliance on 3D mesh reconstructions, instead utilizing sensor data, making it better aligned with realistic embodied AI scenarios. Essential References Not Discussed: The citations in this paper are primarily limited to point clouds and NeRF (e.g., Panoptic-Lifting). However, relevant works on 3D Gaussian Splatting (3DGS) have been omitted, such as GOI [1] which leverages 2D RES models to achieve 3D RES. [1] Goi: Find 3d gaussians of interest with an optimizable open-vocabulary semantic-space hyperplane Other Strengths And Weaknesses: Strengths: 1. This paper presents UniVLG, a unified model capable of handling both 2D and 3D vision-language tasks, promoting seamless integration across modalities. 2. The paper employs 2D-to-3D lifting strategies to enhance 3D reasoning, improving adaptability to real-world embodied AI scenarios that rely on sensor data rather than 3D mesh reconstructions. 3. UniVLG achieves SOTA results on multiple 3D referential grounding and 3D question-answering benchmarks while maintaining non-degraded 2D performance. Weaknesses: 1. In Table 4, the comparison between the 2D-3D and 2D-only settings shows no performance degradation in the 2D-3D setting. However, since the baselines considered are primarily from work published before 2024, this raises the question of whether 2D referential grounding performance has reached its upper limit. 2. As shown in Appendix A.11 (2D-3D Generalization Test), the model does not appear to fully utilize the 2D knowledge, which may limit its potential to leverage broader 2D datasets for future improvements. Other Comments Or Suggestions: I have no additional comments or suggestions. Questions For Authors: I have no questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback. > “The citations in this paper are primarily limited to point clouds and NeRF (e.g., Panoptic-Lifting). However, relevant works on 3D Gaussian Splatting (3DGS) have been omitted, such as GOI [1] which leverages 2D RES models to achieve 3D RES.” Thank you for this suggestion, we will add this to our related work. > “In Table 4, the comparison between the 2D-3D and 2D-only settings shows no performance degradation in the 2D-3D setting. However, since the baselines considered are primarily from work published before 2024, this raises the question of whether 2D referential grounding performance has reached its upper limit.” This is correct: we chose these baselines since other recent 2D grounding models train on an order of magnitude more data than the 2D datasets we used. Certainly, scaling with more 2D data is a direct avenue of future work. Our main point of this experiment was to show that, indeed, we can build a unified 2D-3D visual grounding model which benefits data-scarce 3D modality without sacrificing performance on 2D datasets. > “As shown in Appendix A.11 (2D-3D Generalization Test), the model does not appear to fully utilize the 2D knowledge, which may limit its potential to leverage broader 2D datasets for future improvements.” Absolutely! Our intention with this experiment was to explicitly demonstrate this generalization gap and inspire future research on achieving stronger 2D-3D generalization. Further improving this generalization will be crucial to truly utilize the 2D datasets for the 3D datasets. Happy to clarify any concerns that might come up and help increase your score further.
Summary: This paper presents a novel model called UniVLG for 3D vision-language tasks, including 3D visual grounding and 3D question-answering. By leveraging 2D visual grounding datasets, the model gains additional benefits, and the authors provide several empirical findings on improving performance—such as updating visual features. The proposed model demonstrates strong results on existing benchmarks. Claims And Evidence: Most claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: 1. The method section focuses mainly on 3D-based visual grounding but provides limited details on the 2D-based visual grounding, making that aspect unclear. 2. The authors mention a 2D–3D lifting with a 50% probability. What is the rationale for this design choice, and what purpose does it serve? Theoretical Claims: NA. Experimental Designs Or Analyses: 1. In Table 1, the GT accuracies for UniVLG on Sr3D and Nr3D are not provided. 2. For the DET and GT setups, does UniVLG differ only in how the predicted bounding box is chosen, with the inputs and inference pipelines remaining unchanged between these two settings? 3. The paper is titled “Unifying 2D and 3D Vision-Language Understanding,” yet the method design and experimental results focus predominantly on 3D vision-language understanding. The 2D data is used mainly to boost 3D performance, as shown by Table 4, where 3D data does not improve 2D performance. This suggests that the title may be overreaching. 4. A key contribution is the ability to leverage 2D data for joint training. To better illustrate this, it would be helpful to show the scaling effects of adding more 2D data—namely, how larger quantities of 2D data incrementally improve performance. Supplementary Material: I only review the "Additional Implementation details" in the supplementary material. Relation To Broader Scientific Literature: 1. The joint incorporation of 2D and 3D with 2D–3D lifting is novel. 2. Although the decoder follows approaches in previous works such as Mask2Former and “Grounded 3D-LLM with Referent Tokens” (which should be included in the related work), the empirical insights—for example, “updating the visual feature”—are new contributions. 3. The results are impressive, surpassing prior state-of-the-art performances. 4. The focus on projected point clouds instead of mesh point clouds is significant and has been largely overlooked by earlier studies. Essential References Not Discussed: 1. "Grounded 3D-LLM with Referent Tokens" (https://arxiv.org/abs/2405.10370) Other Strengths And Weaknesses: Overall, this work is impressive and appears to be a valuable contribution to the field. However, there is still room for further improvement. Please refer to the previous sections for more details. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review. We try to address your concerns below: > “The method section focuses mainly on 3D-based visual grounding but provides limited details on the 2D-based visual grounding, making that aspect unclear.” The method is identical between 2D and 3D visual grounding. The model takes as input a language query, N RGB images of shape N × H × W × 3, and an associated 3D pointmap of shape N × H × W × 3. For 2D datasets, we have N=1 frame and we obtain the pointmap using neural 2D-to-3D lifting modules. For 3D datasets we obtain a pointmap by unprojecting the RGB-D images and camera parameters. The output consists of segmentation masks for each object mentioned in the sentence, a corresponding text span that refers to each segmented object, and optionally, generated text that answers the question. The segmentation mask shares the same representation between 2D and 3D – it is obtained as a K X M output mask where K is the number of objects, and M represents the spatial dimension of the feature map. For a single RGB image, we flatten the 2D feature map resulting in M = H * W; In the multi-view (3D) case, we simply include the N frames as: M = N * H * W * F. All layers and losses are shared between the two. Let us know if this makes sense, we will make it more clear in our camera-ready version. > “The authors mention a 2D–3D lifting with a 50% probability. What is the rationale for this design choice, and what purpose does it serve?” The 2D-to-3D lifting can be imperfect, and although it helps in 3D referential grounding tasks (like on ScanNet datasets), the noise from this lifting can hurt the 2D-only grounding performance. Thus we feed the original 2D image 50% of the time, so that at test time, the model can perform better on 2D referential grounding. When we do not do the 2D-to-3D lifting, we simply skip all 3D layers, and only pass the images through the 2D layers. > “In Table 1, the GT accuracies for UniVLG on Sr3D and Nr3D are not provided.” The GT setup is not our main focus as it makes an unrealistic assumption of provided GT 3D boxes - we only include it for completeness as several prior methods report this number. We only train the UniVLG-3D-only model on the GT setup because in the joint setup we want to use 2D data, but using GT boxes is uncommon for 2D datasets — and we didn’t want to expend compute on that. > For the DET and GT setups, does UniVLG differ only in how the predicted bounding box is chosen, with the inputs and inference pipelines remaining unchanged between these two settings? Not quite. In the DET setup, UniVLG decodes a segmentation mask from scratch – it does not select a box out of a set of proposals. In the GT setup, we assume access to GT masks as input, we pool visual features inside the given ground-truth masks, and the object queries predict a segmentation mask over the “pooled” feature tokens, one token per object. Thus, GT setup assumes GT masks as input, DET setup does not assume any extra input (apart from RGB-D frames, camera parameters, and language). > “The paper is titled “Unifying 2D and 3D Vision-Language Understanding,” yet the method design and experimental results focus predominantly on 3D vision-language understanding. The 2D data is used mainly to boost 3D performance, as shown by Table 4, where 3D data does not improve 2D performance. This suggests that the title may be overreaching.” You’re right that the focus is on how to use 2D to help 3D performance, but one of the key ideas of this paper is: To achieve this goal of helping 3D performance with 2D data, unifying the model design is a promising direction. The design of UniVLG emphasizes on jointly training on both 2D and 3D data, sharing all model parameters and losses, and as we show in Table-4, it retains its 2D performance while helping 3D performance. > “A key contribution is the ability to leverage 2D data for joint training. To better illustrate this, it would be helpful to show the scaling effects of adding more 2D data—namely, how larger quantities of 2D data incrementally improve performance.” We agree, and this is an important direction for future work, in addition to scaling the total amount of 2D data further. However, we did not have the available compute resources to perform this additional experiment. > “Grounded 3D-LLM with Referent Tokens” should be included in the related work” We agree, thanks for pointing it out. Happy to clarify any concerns that might come up and help increase your score further. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ rebuttal, which addresses most of my concerns. However, I still strongly recommend reconsidering the title “Unifying 2D and 3D Vision-Language Understanding.” The primary focus of this work is clearly on 3D understanding, and the current title overstates the scope. Such overclaims may set a bad precedent for the field.
null
null
null
null
null
null
From Pixels to Perception: Interpretable Predictions via Instance-wise Grouped Feature Selection
Accept (poster)
Summary: This paper introduces **P2P (From Pixels to Perception)**, an instance-wise feature selection method aimed at improving interpretability by selecting **grouped semantic regions** instead of individual pixels. While interpretability is a key challenge, the approach **lacks novelty** and does not sufficiently differentiate from existing **masking-based feature selection methods**. Claims And Evidence: The paper claims that P2P enhances interpretability by **selecting minimal, meaningful features**, but this is not strongly supported. Prior work, such as **InfoMask (Asgari et al., 2019)**, has already explored similar instance-wise masking techniques. The lack of direct comparisons makes it difficult to assess whether P2P truly provides an improvement. Methods And Evaluation Criteria: P2P relies on **SLIC Superpixels** to group features, but this may not always produce **semantically meaningful regions** for highly structured images. Additionally, the assumption that **removing features improves interpretability** is not always valid, as selective removal can distort decision boundaries. Theoretical Claims: The method does not introduce significant theoretical advancements beyond **existing feature selection approaches**. The assumption that **less information leads to better interpretability** should be tested more rigorously. Experimental Designs Or Analyses: The experiments **lack comparisons to strong baselines** like **InfoMask** or other **instance-wise feature selection methods**. Without these, it is unclear whether P2P offers meaningful improvements over prior techniques. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper is related to **interpretable machine learning and feature selection**. Essential References Not Discussed: **InfoMask (Asgari et al., 2019)** – A key prior work on instance-wise masking-based feature selection. Other Strengths And Weaknesses: No Other Comments Or Suggestions: _+_ Focuses on interpretability, an important ML challenge. _-_ Lacks novelty and questionable assumptions about feature removal. Questions For Authors: 1. How does P2P compare to InfoMask? 2. Why assume feature removal always improves interpretability? Have you tested cases where removal distorts model decisions? 3. How do superpixels impact interpretability? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, we thank you for your comments and feedback. Please find below our answers to the open points. > How does P2P compare to InfoMask? We thank the reviewer for pointing out this baseline. InfoMask [1] is a nice information bottleneck-inspired approach that learns a masking on pixel level, regularized by a kl-divergence. Some notable differences are that 1. InfoMask operates directly on pixel level, and 2. InfoMask uses 'semi-hard' masking, where each masking probability is either 0, or in (0,1). We present the results of applying InfoMask in the same setup as P2P below. For our rebuttal, we focus on ImageNet-9, and COCO-10 over 3 seeds. We will include the results in the camera-ready version of the paper. | Dataset | Method | Accuracy (\%) | Localization (\%) | | ---------- | -------- | ------------- | ----------------- | | ImageNet-9 | InfoMask | 94.69 | 38.55 | | | P2P | 94.42 | 69.25 | | COCO-10 | InfoMask | 89.20 | 26.02 | | | P2P | 89.53 | 47.01 | Additionally, in https://anonymous.4open.science/r/P2P/figures_rebuttal/, we provide the Insertion curves. While InfoMask, similar to P2P, achieves near-optimal performance, we see that in the other two evaluation axes, P2P outperforms InfoMask. We argue that this difference arises because the mask of InfoMask is learned on pixel-basis, which encourages it to behave similarly to Blackbox Pixel. That is, in these complex datasets, InfoMask selects pixels nearly randomly, relying on the fact that a reduction in resolution does not effectively reduce information from the image. On the other hand, P2P selects only a sparse subset of features, that are relevant for the prediction. To support this point, we add a visualization of InfoMask's learned mask, for an example where it works well, in the repository above. Notice the reliance on the noisy sampling. > Why assume feature removal always improves interpretability? Have you tested cases where removal distorts model decisions? We agree that by removing features, we effectively reduce the high-dimensional decision boundary to a lower-dimensional one. This can lead to distortions due to the compression, whereby the target loss tries to minimize these distortions. We argue that a lower-dimensional decision boundary improves interpretability, as less complexity is generally easier to understand for humans. This is supported by research in cognitive psychology, such as [2,3]. Naturally, there are cases where the model decision changes, once P2P is used, compared to a black-box classifier. But even for some of these cases, it is understandable (thanks to P2P's masking), and quite frankly amusing, why the misclassification occurred. In https://anonymous.4open.science/r/P2P/figures_rebuttal/, we provide a selection of failure cases, where the added interpretability helps in understanding why the model failed. > How do superpixels impact interpretability? We argue that superpixels aid the model in defining a meaningful feature basis, where the selected features are human-interpretable, in contrast to learning a mask on pixel level. As part of this rebuttal, we have explored the effect of the specific superpixel algorithm on P2P's performance by replacing the SLIC superpixel algorithm with the Watershed superpixel algorithm, choosing the hyperparameters by ensuring similar number of segments. | Dataset | Method | Accuracy (\%) | Localization (\%) | | ---------- | ----------------- | ------------- | ----------------- | | ImageNet-9 | P2P (Watershed) | 93.95 | 67.70 | | | P2P (SLIC) | 94.42 | 69.25 | | COCO-10 | P2P (Watershed) | 90.05 | 46.95 | | | P2P (SLIC) | 89.53 | 47.01 | We see that there are no strong dependency on the specific choice of superpixel algorithm, further strengthening the generality of P2P. We will include these results in the camera-ready version of the paper. [1] Taghanaki, S.A. _et al._ (2019). InfoMask: Masked Variational Latent Representation to Localize Chest Disease. In: Shen, D., _et al._ Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. MICCAI 2019. Lecture Notes in Computer Science(), vol 11769. Springer, Cham. [2] Miller, George A. "The magical number seven, plus or minus two: Some limits on our capacity for processing information." _Psychological review_ 63.2 (1956): 81. [3] Sweller, John. "Cognitive load during problem solving: Effects on learning." _Cognitive science_ 12.2 (1988): 257-285.
Summary: The paper proposes a method that learns a masking function that is able to semantically separating important information from background noise. As a part of this, the authors introduce a dynamic threshold based on classification probabilities to determine the level of sparsity for the instance. The authors evaluate their method for classification and localization benchmarks, where the method perform well. Claims And Evidence: The claim of SLIC being the optimal superpoint algorithm is not further ablated. Therefore, it is difficult to assess of this selection is optimal. Methods And Evaluation Criteria: - The method section is well structured - The Equations before and after Eq 3 lack numbering - The intuition behind using a dynamic threshold is reasonable and a good design choice - Since the dymanic thresholding takes into account a certainty measure (class prediction softmax „probability“), a look at alternative uncertainty quantification methods from active learning could be interesting. Examples for epistemic uncertainty scores could be [1] and [2] - The choice of benchmark datasets is suitable for the task. Especially, IN-9 with background changes is an interesting choice [1] Gal, Yarin, and Zoubin Ghahramani. "Dropout as a bayesian approximation: Representing model uncertainty in deep learning." international conference on machine learning. PMLR, 2016. [2] Rahaman, Rahul. "Uncertainty quantification and deep ensembles." Advances in neural information processing systems 34 (2021): 20063-20075. Theoretical Claims: - The proof for semi-definiteness in the appendix is sound Experimental Designs Or Analyses: - Distortion (continuous values) vs. Removal (binary): The authors simply make the assumption that this design choice is the superior. I think an evaluation the difference in performance between both alternatives would be an important ablation - The method relies on SLIC for super pixels. Since this is an off-the-shelf component of the method, an ablation of using alternative super pixel algorithms and their impact on performance is essential - The method shows only minimal gains for classification over the next best competing method, COMET, and being outperformed on the largest, most extensive benchmark ImageNet-1K. The gains for localizations are larger though. Supplementary Material: The additional visualizations as well as the proof complement the main text nicely. Relation To Broader Scientific Literature: In contrast to the best competing method COMET, the proposed method focuses on relevant foreground regions. There is an inverted version of COMET though, which focuses on the background, but the proposed method mostly outperforms ist Essential References Not Discussed: None Other Strengths And Weaknesses: My comments fit well into the previous sections Other Comments Or Suggestions: - In the introduction, the acronym P2P is just introduced without explaining what it means. One can guess from the title, but it should be introduced anyhow Questions For Authors: Generally, the paper is well written and interesting. I would appreciate appreciate if the authors would further improve their work by: - Ablating the choice of super pixel algorithm, since its just assumed as a fixed component without investigating the effect of it - Explain why their method does not outperform COMET for the largest and most diverse benchmark ImageNet - Experiment with continuous masking vs. binary masking as opposed to simply assuming one is better than the other Since my experience with this subfield is limited, I will also take into account the comments from other reviewers for my final judgement Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and questions! Below is our point-by-point response. > Ablating the choice of super pixel algorithm, since its just assumed as a fixed component without investigating the effect of it We agree that the superpixels are a central part of our method, thus, warranting an ablation analysis. In some preliminary experiments, we observed that the choice of SLIC's hyperparameter do not have a strong effect on performance, as long as the number of segments chosen is reasonable (i.e. >20). As part of this rebuttal, we have explored the effect of the specific superpixel algorithm on P2P's performance by replacing the SLIC superpixel algorithm with the Watershed superpixel algorithm, choosing the hyperparameters by ensuring similar number of segments. | Dataset | Method | Accuracy (\%) | Localization (\%) | | ---------- | ----------------- | ------------- | ----------------- | | ImageNet-9 | P2P (Watershed) | 93.95 | 67.70 | | | P2P (SLIC) | 94.42 | 69.25 | | COCO-10 | P2P (Watershed) | 90.05 | 46.95 | | | P2P (SLIC) | 89.53 | 47.01 | We see that there are no strong dependency on the specific choice of superpixel algorithm, further strengthening the generality of P2P. We will include these results in the camera-ready version of the paper. > Explain why their method does not outperform COMET for the largest and most diverse benchmark ImageNet We thank the reviewer for posing this question, as it allows us to improve our explanation of why we believe that COMET does not faithfully make a prediction based on the highlighted pixels, thus rendering its interpretability questionable: COMET learns a continuous-valued mask, thereby highlighting some pixels, while darkening others. Note that darkening an image does not destroy any numerical information that the classifier can use. In order to be an inherently interpretable instance-wise feature selection model, we argue that COMET's classification should rely on the highlighted pixels. In our work, we investigate whether this holds in two ways. 1. By evaluating Insertion Fidelity in Figure 3 of the submission: We start from a black image and iteratively add the (according to the mask) most important pixels. If the model relied on these pixels, we would expect that it's predictive performance quickly improves. Notice that this is not the case, as COMET is one of the worst-performing methods on this metric. 2. By evaluation of COMET^-1: To check whether COMET bases its prediction only on highlighted pixels, we introduce COMET^-1. Here, COMET highlights the unimportant pixels and then makes a prediction based on this masked image. If COMET would base it's prediction only the highlighted features, then, this variant should perform very badly, as it highlights unimportant pixels. However, COMET^-1 has a high accuracy. This indicates that COMET does not make a prediction based on the highlighted features only, but also uses the non-highlighted pixels. Combined, this suggests that COMET's classification does not rely on only the highlighted areas of the image, but on the full image. This indicates that COMET is not an inherently interpretable feature selector. To summarize and answer your question: COMET performs well on ImageNet because in contrast to P2P, it does not select a subset of features but still uses the full image to make a prediction. We will improve our explanation in the camera-ready version of the paper. > Experiment with continuous masking vs. binary masking as opposed to simply assuming one is better than the other As we argued and seen previously for COMET, we strongly believe that continuous masking does not effectively remove information. Thus, any model with continuous masking is, in our eyes, not a real instance-wise feature selector. Continuous-valued masking has a high risk of spurious correlations[1], where the model uses these darker pixels, and the user does not realize this. As such, we believe continuous masking is a risk for interpretability, where the goal is for the human user to understand the model's decision making. We want to avoid the possibility of P2P having this non-interpretable hard-to-detect behavior. [1] Geirhos, Robert, et al. "Shortcut learning in deep neural networks." _Nature Machine Intelligence_ 2.11 (2020): 665-673. --- Rebuttal Comment 1.1: Comment: The authors' rebuttal has addressed my concerns well! Taking into account to overall positive sentiment from other reviewers, as well as the convincing response, I will increase my score
Summary: This paper presents P2P (Pixels to Perception), an inherently interpretable image-classification model that performs instance-wise feature selection using grouped feature sparsification at the superpixel level rather than individual pixels. The authors argue that sparsifying at the pixel level can lead to non-human-interpretable explanations, whereas P2P enforces structured sparsification in perceptually meaningful regions. Key Contributions: - Superpixel-Based Feature Selection: Instead of selecting individual pixels, P2P groups pixels into superpixels (e.g., using SLIC) and learns a binary mask at the region level. - Instance-Wise Adaptability: The model dynamically determines the sparsity level per instance rather than applying a fixed threshold across all images. - Part-Object Relationship Modeling: A logit-normal distribution models relationships between different superpixels to better capture object structures. - Faithful and Interpretable Predictions: The model ensures that only the selected regions contribute to classification, avoiding reliance on dimmed but still informative pixels (as seen in COMET). Key Results: - Comparable accuracy to black-box models while removing up to 80% of the image content. - Stronger object localization than existing methods (REAL-X, RB-AEM, COMET, B-cos). - High faithfulness as demonstrated by insertion/deletion tests. - Evaluations on CIFAR-10, ImageNet, COCO-10, and BAM datasets. The proposed approach ensures more human-understandable predictions by selecting perceptually meaningful regions instead of arbitrary pixels. Claims And Evidence: The paper makes several claims, all of which are well-supported: Claim: Superpixel-based feature selection is more interpretable than pixel-wise selection. Evidence: P2P’s masks align with object boundaries better than pixel-level baselines. This is demonstrated via visualization and quantitative comparisons against segmentation ground truth. Claim: P2P achieves high classification accuracy despite removing large portions of the image. Evidence: P2P maintains accuracy close to black-box models, outperforming sparsity-based baselines like DiET and RB-AEM. Claim: P2P produces faithful feature selections. Evidence: The insertion/deletion experiments confirm that only the revealed pixels influence predictions, unlike COMET, where inverted masks still perform well. Claim: P2P dynamically adapts the level of sparsity per instance. Evidence: The ablation study shows that fixed sparsity thresholds reduce accuracy, whereas dynamic sparsity maintains performance with better localization. Overall, the paper provides clear empirical evidence for its claims using well-structured experiments. Methods And Evaluation Criteria: Methods: - Uses SLIC superpixels to partition the image into perceptual regions. - Learns a logit-normal probability distribution to model relationships among superpixels. - Implements dynamic thresholding to control sparsity per instance. Evaluation Criteria: - Classification Accuracy – How well the model classifies images after feature selection. - Localization – The overlap of selected regions with ground-truth object masks. - Faithfulness – Whether removing the selected regions prevents correct classification. The evaluation metrics align well with the problem, and results are reported across multiple datasets. Computational overhead is not analyzed in-depth. Theoretical Claims: The paper includes a mathematical formulation of instance-wise grouped feature selection and a derivation proving that the logit-normal covariance matrix is positive semi-definite. The proof (Appendix A) is correct and straightforward, ensuring valid covariance modeling. While the derivations are correct, more discussion on the computational efficiency of logit-normal covariance modeling would be beneficial. Experimental Designs Or Analyses: The experiments are rigorous and well-structured: - Datasets: CIFAR-10, ImageNet, COCO-10, BAM (semi-synthetic). - Baselines: COMET, REAL-X, RB-AEM, DiET, B-cos. - Metrics: Classification accuracy, localization overlap, insertion/deletion for faithfulness. - Ablation studies: Fixed vs. dynamic sparsity, alternative selection models. While accuracy and faithfulness are well-evaluated, the computational efficiency of P2P compared to baselines is not deeply analyzed. Supplementary Material: The appendix provides additional experiments, including extra datasets (BAM), localization results, and proofs. The code is anonymized and made available for reproducibility. The supplementary material is comprehensive and supports the main claims well. Relation To Broader Scientific Literature: The paper is well-grounded in the literature, drawing connections to: - Feature selection (LASSO, instance-wise sparsity methods like REAL-X, DiET). - Explainability in deep learning (Grad-CAM, concept-based models). - Human perception studies (Gestalt principles, Biederman’s recognition-by-components theory). The paper does not compare against vision transformers (ViTs), which are increasingly used for interpretability. Essential References Not Discussed: The references are generally comprehensive, but a few areas could be expanded: - Vision Transformer (ViT)-based explainability approaches. - Graph-based feature selection methods, which also model part-whole relationships. Other Strengths And Weaknesses: Strengths - Perceptually meaningful selection – Superpixel-level masks align better with human understanding. - Faithfulness – The model truly depends on the selected regions. - Dynamic thresholding – Adapts sparsity to instance complexity. - Strong empirical validation – Evaluated on multiple datasets with rigorous baselines. Weaknesses - Computational complexity – Training/inference efficiency is not analyzed. - Superpixel quality dependency – Performance might degrade with suboptimal superpixel partitioning. - Limited discussion of transformer-based explainability – No comparison to ViTs. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, Thank you for your thorough review and positive feedback on our work! As there are no questions, we will keep our response brief. P2P is computationally efficient, as the computational overhead of the logit-normal covariance modeling is negligible compared to the rest of the architecture. As such, P2P is as fast as Real-X and faster than COMET, which has two classifiers. The computation of the superpixels is done on CPU during dataloading and as such does not introduce a computational overhead. For an ablation on the superpixel algorithm used, we refer to our rebuttal to Reviewer kwsZ or Reviewer px8r, showing that P2P does not rely on the specific choice of superpixel algorithm. Please let us know if any questions pop up that you would like us to address.
Summary: The goal of this method is to improve the interpretability of machine learning models. This work proposes a new approach to inherent interpretability by sparsifying the input images for model predictions. To achieve this, the method masks semantically defined pixel regions instead of individual pixels and employs dynamic thresholding to determine the necessary level of sparsity during inference. It is evaluated across multiple datasets, successfully producing human-understandable predictions while retaining the predictive performance of blackbox models. Claims And Evidence: The paper claims the following: 1. Novel semantic region-based approach: The paper builds on the COMET approach by using regions instead of pixels. I believe this claim is valid, as it is a novel approach in this field. 2. Dynamic thresholding that adjusts sparsity: They propose a method for dynamic thresholding by uniformly sampling threshold parameters during training and then using Equation 4 as a threshold parameter for an instance during inference. 3. Thorough empirical assessment: The paper includes extensive experiments. They compare closely to the blackbox approach, as shown in Table 1, and demonstrate effective localization, as presented in Table 2. The study also features robust ablation studies and visual comparisons. Methods And Evaluation Criteria: Yes, I believe that both the method and the evaluation criteria make sense overall. Theoretical Claims: I don't think there are any formal theoretical claims in this paper. Experimental Designs Or Analyses: Yes, I believe that the experimental design is well-structured and effectively evaluates the paper. It compares with blackbox models and other relevant methods, and it includes experiments on localization, ablation studies, and visual comparisons. Supplementary Material: Yes, I reviewed all the supplementary material where I needed clarification. Relation To Broader Scientific Literature: The paper's key contribution is closely related to inherent interpretability literature. They build on methods like COMET. Essential References Not Discussed: I'm not very familiar with this literature in depth. However, based on the related works section, it seems the paper provides a broad introduction to the topic. Other Strengths And Weaknesses: I personally find the method easy to understand and well written. However, I believe that Figure 2 could be improved with better captions for the freeze and unfreeze parts. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review and for their positive feedback. As there are no open questions, we keep our rebuttal brief. We thank the reviewer for their comment on Figure 2 and will improve the clarity of the Figure in the camera-ready version of the paper. Please let us know if there are any other questions that could further convince the reviewer of our work, and we will be happy to address them.
null
null
null
null
null
null
O-MAPL: Offline Multi-agent Preference Learning
Accept (poster)
Summary: This paper studies the problem of cooperative multi-agent reinforcement learning from preference data. The authors formulate the problem as cooperative Markov game with a global reward function where the goal of the agents is to learn optimal policies given offline pairs of trajectories with corresponding preferences. The authors make use of a one-to-one correspondence between the inverse Bellman operator of KL-regularized Q-functions and the global reward function to formulate the optimal policy objective--thereby reducing the problem to learning the optimal Q-functions. The combinatorial computational complexity due to the presence of many agents is tackled via value decomposition and mixing through linear (learnable) maps. Moreover, the problem of maintaining a local-to-global consistency while simultaneously having well-defined policies over probability simplexes is tackled through local weighted behavior cloning techniques. The authors also theoretically demonstrate the correspondence of global optimal policies with the learned local policies and the validity of the returned value functions. Extensive experimental results showcase the benefit of using their proposed method, O-MAPL. Claims And Evidence: The convexity of the parameter space over the value functions is theoretically proven. Moreover, the consistency between optimal global policies and optimal local policies, in the WBC sense, is also theoretically shown. The validity of O-MAPL algorithm is demonstrated by extensive experiments on several MARL benchmarks against several baselines. Methods And Evaluation Criteria: The proposed methods and the evaluation criteria are reasonable for the problem at hand. Theoretical Claims: I skimmed through the proofs of the main results in the appendix, and, they seem overall fine. No blunders or unexpected claims. However, I haven't checked in detail. Experimental Designs Or Analyses: I have not checked the validity of the experimental results. Supplementary Material: I skimmed through several proofs in the appendix, but have not checked in detail. Relation To Broader Scientific Literature: There are only two papers that discuss MARL from preference data, namely, (Kang et al., 2024) and (Zhang et al., 2024), both of which go through the reward learning phase and then perform policy optimization on top. The approach of the current paper is novel, in that regularized logistic regression is used to obtain estimates of the inverse Bellman operator of the Q-function, which correspond to the logits of the optimal policy directly. Such an approach bypasses the policy optimization using estimated reward functions, which is where it diverges from previous work. Essential References Not Discussed: To the best of my knowledge, there aren't any essential references missing. The related work section seems adequately comprehensive. Other Strengths And Weaknesses: 1. I really enjoyed reading the paper as it is well-written, clear and coherent. 2. The proposed method has several novel components that are brought together: (i) the reduction of the reward learning problem to the Q-space; (ii) the usage of linear mixing networks for the value decomposition phase; (iii) the usage of local weighted behavior cloning to align local policies to learned global value functions. 3. The utilized rationale behind O-MAPL is justified from theoretical results. 4. The experimental validation of O-MAPL is extensive, which make the algorithm quite practical. I would have liked to see some simple convergence result which sheds some light into how close the output global policy is to the ground-truth global optimal policy with respect to the global reward function, in terms of problem-specific parameters such as data size, data coverage, policy class and the used mixing networks and WBC technique. Other Comments Or Suggestions: N/A Questions For Authors: 1. Why do you switch from discounted MDPs to finite-horizon MDPs when you discuss preference-based learning in Section 4.1? Isn't it better to have a unified framework? 2. Related to my comment on the convergence, is it possible to add some convergence guarantees of O-MAPL? There is an alternating optimization problem being solved for both the optimal Q-function and value function. Due to convexity/concavity, I expect it to converge in finite time. However, what is not clear to me is how the presence of WBC affects convergence. 3. How does data coverage affect the performance of O-MAPL, that is, have you tested it in datasets where not enough "good" trajectories are present? How does it affect the performance? 4. Can you comment on the presence of the additional term in Proposition 4.5? The result indicates that the value function is not "exactly" expressed as log-sum-exp of the Q-function, but there is an additional bias. Why is that? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for carefully reading our paper, offering a positive evaluation, and providing valuable and insightful comments. To address your questions, we have conducted additional experiments, which are detailed in the **PDF** available via this anonymous link: https://1drv.ms/b/s!AgChHLa7t5Bza5QJpMfi7YJX6PI --- Below, we address each of your comments and questions: > Why do you switch from discounted MDPs to finite-horizon MDPs when you discuss preference-based learning in Section 4.1? Isn't it better to have a unified framework? We sincerely thank the reviewer for this insightful comment. We initially adopted the discounted infinite-horizon MDP framework as it is widely utilized in RL and MARL literature due to its general applicability. However, for our objective of the preference-based learning, we chose to exclude the discount factor because the preference data commonly consists of partial or sub-trajectory information rather than complete trajectories. Therefore, our algorithm was developed around an undiscounted sum of rewards, which aligns closely with recent preference-based approaches in the single-agent setting. We will clearly articulate and emphasize this choice in the revised manuscript. > Related to my comment on the convergence, is it possible to add some convergence guarantees of O-MAPL? There is an alternating optimization problem being solved for both the optimal Q-function and value function. Due to convexity/concavity, I expect it to converge in finite time. We sincerely thank the reviewer for this insightful comment. Our alternating optimization approach is based on reformulating the primary learning problem as an equivalent bilevel optimization problem, as shown below: $max_q L(q,v,w)$ s.t. $v= \arg\min J(v)$ In this formulation, $L(q,v,w)$ is concave in $q$ and $J(v)$ and is convex in $v$. This convex-concave structure allows the alternating optimization process — where $q$ and $v$ are updated aternatively — to converge to a stationary point. This optimization method is widely adopted and well-supported in the literature on *bilevel optimization*. We will clarify this point and cite relevant works on bilevel optimization to support our approach. Thank you for bringing this to our attention. > However, what is not clear to me is how the presence of WBC affects convergence. We thank the reviewer for this insightful question. To clarify, the convergence of the Q and V functions arises from solving the bilevel optimization problem using an alternating procedure. The convergence of the WBC component, on the other hand, is theoretically supported by Theorem 4.3 under the assumption that Q and V are fixed. In practice, we train the WBC component simultaneously with Q and V updates to improve computational efficiency. Empirically, one can show that as Q and V converge, the WBC also reliably converges to the optimal policy given a sufficient number of update steps. We will better clarify this point in the revised manuscript. > How does data coverage affect the performance of O-MAPL, that is, have you tested it in datasets where not enough "good" trajectories are present? How does it affect the performance? We thank the reviewer for this insightful question. To address this point, we conducted additional experiments to evaluate the effect of data quality and coverage on O-MAPL’s performance. Specifically, we varied the number of preference samples, and examined cases where LLMs provided lower-quality preference labels. The additional results are reported in **Tables 1, 2 and 4** in the **PDF**. These additional results show that O-MAPL remains robust across a range of dataset qualities, although its performance degrades gracefully as the quality of preference data decreases. We will include these additional results in the revised manuscript to highlight this robustness. > Can you comment on the presence of the additional term in Proposition 4.5? The result indicates that the value function is not "exactly" expressed as log-sum-exp of the Q-function, but there is an additional bias. Why is that? We thank the reviewer for this insightful question. In Proposition 4.5, we demonstrate that the local value function *cannot be exactly expressed* as the log-sum-exp of the local function. This highlights a key limitation of prior MARL approaches that rely solely on local functions to compute local policies or values. The fundamental reason is the interdependence among agents in cooperative settings — local policies are influenced by the joint behavior of other agents. As such, expressing them purely in terms of local and functions leads to an approximation error. We will clarify this point more explicitly in the revised manuscript. --- *We hope the above responses satisfactorily address your questions and concerns. If you have any further questions or comments, we would be happy to provide additional clarification.*
Summary: This paper introduces O-MAPL, an end-to-end preference-based reinforcement learning framework for cooperative multi-agent systems, addressing the challenge of inferring reward functions from demonstrations in complex MARL settings. Prior methods often separate reward learning and policy optimization, leading to instability. The authors propose a novel approach that bypasses explicit reward modeling by leveraging the relationship between rewards and soft Q-functions in MaxEnt RL. Claims And Evidence: While the paper argues that prior methods suffer from instability due to phased training, no empirical evidence is provided to demonstrate that O-MAPL actually achieves greater stability. For instance, there are no comparisons of training curves, convergence rates, or sensitivity analyses against baselines like IIPL or IPL-VDN that decouple reward and policy learning. Without such comparisons, the claim remains speculative. The experiments use LLM-generated preference data for training O-MAPL but do not clarify whether baseline methods (e.g., IIPL, BC) were evaluated under the same data conditions. Performance gains could stem from the quality or scale of LLM-generated data rather than the algorithm itself. Additionally, there is no ablation study comparing rule-based vs. LLM-generated preferences for O-MAPL, making it impossible to disentangle the contribution of the framework from the data source. Methods And Evaluation Criteria: The methods are well aligned with the challenges of multi-agent preference-based RL. By integrating MaxEnt RL with value decomposition under CTDE, O-MAPL addresses the instability of prior two-phase approaches while ensuring global-local policy consistency. The use of linear mixing networks and weighted BC for policy extraction is pragmatic for offline settings, balancing stability and scalability. Evaluation on SMAC (discrete) and MAMuJoCo (continuous) benchmarks covers diverse MARL scenarios, and the inclusion of LLM-generated preference data reflects real-world applicability. While human-labeled datasets are absent (a common MARL limitation), the rule-based/LLM-generated data strategy is reasonable and reproducible. Baselines are relevant, though broader comparisons (e.g., with recent offline MARL methods) could strengthen claims. Overall, the design and evaluation credibly validate the framework’s efficacy. Theoretical Claims: All the proofs are mathematically sound under idealized assumptions Experimental Designs Or Analyses: The experiments use LLM-generated preference data for training O-MAPL but do not clarify whether baseline methods (e.g., IIPL, BC) were evaluated under the same data conditions. Performance gains could stem from the quality or scale of LLM-generated data rather than the algorithm itself. Additionally, there is no ablation study comparing rule-based vs. LLM-generated preferences for O-MAPL, making it impossible to disentangle the contribution of the framework from the data source. Supplementary Material: The supplementary material was reviewed, focusing on the proofs (Appendix A), implementation details (Appendix B), and additional experiments (Appendix C). The proofs revealed a mismatch between the assumption of fixed local Q-functions in the convexity analysis and the actual joint optimization described in the main text. Implementation details lacked critical specifics on preference dataset generation (e.g., LLM hyperparameters) and omitted pseudocode, hindering reproducibility. Additional experiments provided limited ablation studies (e.g., linear mixing tested only on SMAC, not MAMuJoCo) and did not address computational costs or robustness to preference noise. These gaps weaken the support for key claims around stability and scalability. Relation To Broader Scientific Literature: The paper’s key contributions are positioned within the broader MARL literature by advancing offline preference learning and stability in decentralized value decomposition, addressing gaps in prior work while building on established frameworks. Essential References Not Discussed: In my point of view, the related works that are essential to understanding the (context for) key contributions of the paper were all currently cited/discussed in the paper. Other Strengths And Weaknesses: **Strengths** 1. The paper introduces a novel end-to-end preference-based learning framework for multi-agent RL that bypasses explicit reward modeling by leveraging the intrinsic relationship between reward functions and soft Q-functions. This approach is original, as it creatively combines MaxEnt RL principles with a multi-agent value decomposition strategy, addressing the instability of prior two-phase methods while ensuring global-local policy consistency through theoretical guarantees. 2. The authors rigorously evaluate their method across diverse benchmarks (SMAC, MAMuJoCo) using both rule-based and LLM-generated preference data. The results demonstrate consistent superiority over baselines, particularly highlighting the effectiveness of LLM-based preference data, which opens new avenues for cost-effective training in complex multi-agent systems. **Weaknesses** 1. The work focuses exclusively on cooperative settings, leaving mixed cooperative-competitive environments unexplored. This restricts the framework’s applicability to real-world scenarios where adversarial interactions or heterogeneous objectives are common, and the proposed value decomposition may not generalize. 2. While leveraging LLMs mitigates data generation costs, the method still requires substantial trajectory pairs for training. The paper does not address sample efficiency in low-data regimes or human-in-the-loop settings, limiting its practicality where preference queries are expensive or scarce. Other Comments Or Suggestions: None. Questions For Authors: 1. The paper emphasizes the use of linear mixing networks to preserve convexity (Prop. 4.1–4.2) but acknowledges that non-linear mixing (e.g., two-layer networks) is common in prior MARL works. However, the experiments do not include comparisons with non-linear mixing variants. Could the authors provide ablation studies or empirical evidence showing that linear mixing indeed outperforms non-linear alternatives in their framework? 2. The LLM-based preference generation is a novel contribution, but the paper does not analyze the quality of LLM-generated labels (e.g., alignment with ground-truth human preferences or rule-based labels). How sensitive is O-MAPL to potential inaccuracies or biases in LLM-generated preferences? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's detailed feedback and constructive questions. To address your questions, we have conducted additional experiments, which are detailed in the **PDF** available via this anonymous link: https://1drv.ms/b/s!AgChHLa7t5Bza5QJpMfi7YJX6PI ---- Below, we respond point-by-point to your valuable suggestions and concerns. > There are no comparisons of training curves, convergence rates, or sensitivity analyses against baselines like IIPL or IPL-VDN … We thank the reviewer for highlighting this important point. To clarify, the curves presented in our paper (Figure 1 and additional figures in the appendix) represent **evaluations during training progress**. Specifically, these curves illustrate the evolution of the value function during training, comparing different approaches (O-MAPL, IIPL, IPL-VDN). > The experiments use LLM-generated preference data for training O-MAPL but do not clarify whether baseline methods (e.g., IIPL, BC) were evaluated under the same data conditions. We thank the reviewer for the insightful comment. We would like to clarify that all baselines have been evaluated using the **same preference data**. We will explicitly state this point in the revised manuscript. > Additionally, there is no ablation study comparing rule-based vs. LLM-generated preferences for O-MAPL We appreciate the reviewer's suggestion. To clarify, we already have Table 1 and several figures in the appendix which show a clear comparative analysis between **rule-based and LLM-based preference data** across all methods considered. > Implementation details lacked critical specifics on preference dataset generation (e.g., LLM hyperparameters) and omitted pseudocode, hindering reproducibility. We thank the reviewer for this feedback. To clarify, we have already included several details in the appendix on our data generation methods (rule-based and LLM-based), including hyperparameters and specific prompts used to obtain LLM-generated preferences. We believe this information is sufficient for reproducibility. Regarding the pseudocode, we agree that our algorithm description is currently quite brief. We will include a more detailed and structured presentation of the algorithm in the updated paper to improve clarity and reproducibility. > Additional experiments provided limited ablation studies (e.g., linear mixing tested only on SMAC, not MAMuJoCo) and did not address computational costs or robustness to preference noise We thank the reviewer for raising these important points. To clarify, our experiments did apply linear mixing and our proposed value decomposition methods to **both SMAC and MAMuJoCo tasks**. Regarding computational costs, we have provided some details in section B.2. If the reviewer could specify which particular aspects of preference noise we have overlooked, we would be happy to provide further clarification and analysis. > The work focuses exclusively on cooperative settings, leaving mixed cooperative-competitive environments unexplored. … We thank the reviewer for highlighting this important consideration. Our current work focuses specifically on cooperative settings. As noted in our conclusion, addressing mixed cooperative-competitive scenarios would indeed necessitate significantly different methodologies, which lie beyond the current scope. We agree that exploring these mixed scenarios is a valuable direction for future research. > The experiments do not include comparisons with non-linear mixing variants. … We thank the reviewer for this insightful comment. We focused our experiments on the linear mixing setting. This decision was based on previous studies indicating that two-layer mixing networks perform significantly worse than one-layer linear mixing networks especially in offline settings with limited data [1] [1] Bui et al. ComaDICE: Offline cooperative multi-agent reinforcement learning with stationary distribution shift regularization. In ICLR 2025. To address your question, we have conducted additional comparative experiments between linear and nonlinear mixing structures. The results, shown in **Table 3** in the **PDF**, clearly demonstrate that the linear mixing structure outperforms the two-layer structure. > The paper does not analyze the quality of LLM-generated labels.... How sensitive is O-MAPL to potential inaccuracies or biases in LLM-generated preferences? We thank the reviewer for the insightful comment. To address this concern, we conducted additional experiments comparing the performance of our method using two versions of ChatGPT: **4o and 4o-mini** (the latter being considered weaker in terms of long-term reasoning capabilities). The comparison results, reported in **Table 4** in the **PDF**. --- *We hope that the above responses satisfactorily address your questions and concerns. If you have further questions, we are happy to provide additional clarification.*
Summary: This paper proposes a multi-agent offline reinforcement learning algorithm named O-MAPL, addressing the problem of directly training multi-agent policies using human preference data under offline data conditions. The authors highlight limitations of traditional two-stage methods (first learning a reward model, then learning a policy), including: 1. The requirement for large preference datasets covering state and action spaces. 2. Performance degradation due to misalignment between the two stages. To address these issues, they propose an end-to-end framework that directly learns policies from preference data in an offline setting. Specifically, leveraging the correspondence between soft Q-functions and reward functions under the Maximum Entropy Reinforcement Learning (MaxEnt RL) framework, they introduce a method to train Q-functions (i.e., agent policies) directly using preference data. Key technical contributions include: 1. A mixing network for value factorization to enable multi-agent coordination. 2. A linear value decomposition network designed to align with preference learning objectives and the IGM (Individual-Global-Maximization) principle, ensuring training stability and global-local consistency. Experimental results demonstrate that O-MAPL outperforms existing methods across benchmark environments such as SMAC (StarCraft Multi-Agent Challenge) and MaMuJoCo (Multi-Agent MuJoCo). Claims And Evidence: The effectiveness of the proposed O-MAPL algorithm is sufficiently supported by experiments. Validity of the end-to-end learning framework: Theoretical analysis and experimental results (on SMAC and MaMuJoCo benchmarks) demonstrate that learning policies directly from preference data outperforms traditional two-stage methods (baselines include single-agent methods extended to multi-agent settings, e.g., IPPL, SL-MARL, and methods similar to this work but lacking the proposed value decomposition architecture, e.g., IPS-VDN). Importance of value decomposition: Theoretical analysis of the convexity of single-layer mixing networks and global-local consistency. Methods And Evaluation Criteria: The proposed approach of directly learning multi-agent policies from preference data in an offline setting is meaningful for multi-agent preference learning. In multi-agent scenarios, obtaining accurate reward signals is challenging, and preference learning offers a promising solution. However, preference learning in MARL remains underexplored and faces issues such as training efficiency and human involvement costs. Existing two-stage methods (first learning a reward function, then training policies) require extensive preference data and suffer from misalignment between stages. Thus, directly training policies from preference data is a significant contribution to multi-agent preference learning. The use of SMAC and MaMuJoCo for evaluation is appropriate, as these environments are widely adopted for benchmarking multi-agent reinforcement learning methods. Training with both rule-based and LLM-generated preference data is reasonable, as these methods generate preference data of varying quality, enabling comprehensive algorithm evaluation. Theoretical Claims: The theoretical analysis in Section 4 includes: 1. Theorem 4.1 (Convexity Analysis): The loss function under a single-layer linear mixing network is convex. 2. Proposition 4.2 (Non-convexity in Nonlinear Networks): Two-layer mixing networks disrupt convexity. 3. Proposition 4.3 (Global-Local Consistency): Local policy optimization aligns with global objectives. These theoretical analyses are clear and free of obvious errors. Experimental Designs Or Analyses: 1. Experiments are conducted on widely used SMAC and MaMuJoCo environments, ensuring representativeness. 2. Training with rule-based and LLM-generated preference data covers diverse feedback quality. 3. Results are statistically significant, with mean values and standard deviations provided. Supplementary Material: The appendix includes detailed proofs for theorems in Section 4, dataset construction details, and additional experimental results, which are comprehensive. Relation To Broader Scientific Literature: The proposed method is closely related to existing literature: 1. It extends the MaxEnt RL framework to multi-agent offline preference learning, enabling end-to-end training. 2. It aligns with preference learning research by leveraging preference data for multi-agent policy training. Essential References Not Discussed: The related work section adequately covers offline RL and preference learning. This work builds upon multi-agent offline RL and integrates preference data for policy training. Other Strengths And Weaknesses: Strengths: 1. Practical relevance: The offline learning paradigm avoids costly trial-and-error interactions. 2. Novelty: A new paradigm for multi-agent preference learning that mitigates defects of two-stage methods. 3. Rigorous theoretical analysis and experiments. Weaknesses: 1. No evaluation under limited preference data, a common real-world constraint. 2. Insufficient demonstration of the value decomposition module’s specific contributions. Other Comments Or Suggestions: Suggest adding implementation details in the appendix to facilitate reproducibility. Questions For Authors: 1. How does the method perform when preference feedback is limited? 2. Can the approach be extended to mixed cooperative-competitive scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive evaluation and insightful feedback. Below, we provide detailed responses to your comments. To address your questions, we have conducted additional experiments, which are detailed in the **PDF** available via this anonymous link: https://1drv.ms/b/s!AgChHLa7t5Bza5QJpMfi7YJX6PI --- > How does the method perform when preference feedback is limited? We thank the reviewer for the insightful comment. To address your question, we conducted additional experiments in which we systematically reduced the amount of preference data to 75%, 50%, and 25% of what was used in our main experiments. The comparison results — reported in Tables 1 and 2 in the **PDF** — demonstrate how our method and several important baselines perform under reduced preference feedback. As expected, the overall performance of all methods degraded with less data; however, O-MAPL consistently outperformed the other baselines across all settings, showing robustness to limited preference feedback. > Insufficient demonstration of the value decomposition module’s specific contributions. We thank the reviewer for the comment. To clarify, we have included several variants of our method to evaluate the specific contributions of the value decomposition module. Specifically, IPL-VDN is a variant where we fix the weights in the value decomposition module and do not learn them, while IIPL represents a variant in which each local value function is learned independently per agent, without being aggregated through our mixing architecture. In addition, we have conducted additional experiments comparing the performance of *1-layer* and *2-layer* mixing networks in our value decomposition approach. The comparison results are reported in Table 3 of the **PDF**. > Can the approach be extended to mixed cooperative-competitive scenarios? We thank the reviewer for the question. While the value decomposition approach has potential applicability to mixed cooperative-competitive scenarios, it would require substantial adaptations to handle such complexities effectively. We consider this extension beyond the current scope but agree it is a valuable direction for future exploration. --- *We hope that the above responses satisfactorily address your questions and concerns. If you have further questions, we are happy to provide additional clarification.*
Summary: The paper introduces O-MAPL, a novel framework for multi-agent reinforcement learning that leverages human preference data to train cooperative agents without explicit reward modeling. Traditional MARL methods often require separate stages for reward learning and policy optimization, leading to instability and inefficiency. O-MAPL addresses this by directly learning the soft Q-function from pairwise trajectory preferences, using a centralized training with decentralized execution (CTDE) paradigm. The approach incorporates a value decomposition strategy to ensure global-local consistency and convexity in the learning objective, enabling stable and efficient policy training. The authors conduct extensive experiments on two benchmarks. Results show that O-MAPL consistently outperforms existing methods across various tasks, demonstrating its effectiveness in complex multi-agent environments. The paper also highlights the potential of using LLMs for generating rich and cost-effective preference data, which significantly improves policy learning. Key contributions include: 1. An end-to-end preference-based learning framework for MARL that avoids explicit reward modeling. 2. A value factorization method that ensures global-local consistency and convexity in the learning objective. 3. Extensive empirical validation showing superior performance over existing methods. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: This paper considers a natural extension from existing implicit-reward offline RL method to MARL setting. Essential References Not Discussed: No Other Strengths And Weaknesses: I have one concern about the correctness of the derivation of the implicit-reward method. In section 4.2, the authors proposed to design a training objective function $L(q,v,\omega)$ to avoid explicit reward modeling. However, this training objective relies on the inverse soft Bellman-operator. In $L(q,v,\omega)$, I did not see how to guarantee the inverse soft Bellman-operator which requires the soft Q function to be a fixed point of a soft Bellman-operator. Therefore, in the training pipeline, if the updated Q function can not be guaranteed to be a fixed point of a soft Bellman-operator, then the algorithm can not provide a convergence guarantee to an optimal policy. In practice, this issue usually lead to instability or sub-optimal solutions of the implicit-reward methods [1, 2]. [1] Garg, Divyansh, et al. "Iq-learn: Inverse soft-q learning for imitation." Advances in Neural Information Processing Systems 34 (2021): 4028-4039. [2] Rafailov, Rafael, et al. "Direct preference optimization: Your language model is secretly a reward model." Advances in Neural Information Processing Systems 36 (2023): 53728-53741. Other Comments Or Suggestions: I think how to guarantee the stability and convergence of the implicit-reward method can be critical to the proposed method. Questions For Authors: it will be helpful if the authors show that the Q function updated in their designed training pipeline can be a fixed point to one soft Bellman operator. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading our paper and providing valuable feedback. Below, we address your concern in detail. > I have one concern about the correctness of the derivation of the implicit-reward method. In section 4.2, the authors proposed to design a training objective function to avoid explicit reward modeling. However, this training objective relies on the inverse soft Bellman-operator. In L(), I did not see how to guarantee the inverse soft Bellman-operator which requires the soft Q function to be a fixed point of a soft Bellman-operator. Therefore, in the training pipeline, if the updated Q function can not be guaranteed to be a fixed point of a soft Bellman-operator, then the algorithm can not provide a convergence guarantee to an optimal policy. In practice, this issue usually lead to instability or sub-optimal solutions of the implicit-reward methods [1, 2]. We thank the reviewer for the comment. To clarify, we can formally prove that the trained $Q_{\text{tot}}$ resulting from our preference-based training objective is guaranteed to be a fixed-point solution of the soft Bellman equation. The proof closely follows the structure of prior work such as IQ-Learn, and we provide a version tailored to our formulation and notation in the following. **PROOF**: With the definition of $\mathcal{T}^* Q_{\text{tot}}$ as $(\mathcal{T}^* Q_{\text{tot}})(s, a)$ = $Q_{\text{tot}}(s, a)$ - $\gamma$ $E_{s'}$ $[V_{\text{tot}}(s')]$ we can rewrite the inverse Bellman equation as follows: $Q_{\text{tot}}(s, a)$ = $(\mathcal{T}^* Q_{\text{tot}})(s, a)$ + $\gamma$ $E_{s'}$ $\left[ V_{\text{tot}}(s') \right]$, which implies: $Q_{\text{tot}}(s, a)$ = $(B_r^*$ $Q_{\text{tot}})(s, a)$, where the reward function is defined as $r(s, a)$ = $(\mathcal{T}^* Q_{\text{tot}})(s, a)$, and $(B_r^*$ $Q_{\text{tot}})(s, a)$ is the soft Bellman backup operator defined in our paper. This result shows that $Q_{\text{tot}}$ satisfies the soft Bellman equation. Therefore, if we train the preference-based objective using $(\mathcal{T}^* Q_{\text{tot}})(s, a)$ — i.e., via the inverse soft Bellman equation — then the resulting $Q_{\text{tot}}$ is guaranteed to be a **fixed-point solution** to the soft Bellman equation with reward $r(s, a)$ = $(\mathcal{T}^*$ $Q_{\text{tot}})(s, a)$. **EndProof** We will incorporate the above discussion into the revised paper to clarify the fixed-point convergence of our method. --- *We hope that the above responses satisfactorily address your questions and concerns. If you have further questions, we are happy to provide additional clarification.*
null
null
null
null
null
null
Improved Online Confidence Bounds for Multinomial Logistic Bandits
Accept (poster)
Summary: This paper enhances the regret bound for multinomial logistic bandits. The multinomial logistic bandit problem is a contextual bandit setting where each item is associated with a context vector, the player can select up to K items simultaneously, and the reward is determined probabilistically by a logistic model. Specifically, the reward distribution is governed by multiple logits, each computed as the inner product between an item’s context vector and a hidden environment vector. To maximize rewards, the player must simultaneously learn this hidden vector while selecting high-reward items. A common approach involves an optimistic strategy, which first constructs a confidence set that is likely to contain the hidden vector and then acts as if the best possible vector within this set is the true environment vector. This paper introduces a refined confidence set that is tighter than existing ones, leading to improved regret guarantees. The authors analyze the regret of playing with this confidence set and demonstrate that it achieves better regret bounds with reasonable computational cost, maintaining constant per-round complexity. Claims And Evidence: All of their claims are supported by theoretical proofs. Methods And Evaluation Criteria: The proposed methods make sense. Theoretical Claims: The proofs of all the theorems appear to be correct, although I have not verified every detail. In the proof of Theorem 4.2, the observation that self-concordance holds with respect to the $\ell_\infty$ norm is particularly interesting, and the introduction of a novel intermediary term effectively completes the proof. The proof technique is non-trivial and highly insightful. Other theorems are also well-developed and thoroughly discussed. Experimental Designs Or Analyses: The experimental results are limited, as they only consider synthetic environments. However, given the theoretical nature of this work, this limitation is not a significant concern. Supplementary Material: The supplementary material is extensively dedicated to proofs. Unfortunately, I do not have sufficient time to verify the entirety of the 30-page-long proof. Relation To Broader Scientific Literature: I believe the proof technique could be applicable to other bandit settings. Essential References Not Discussed: None Other Strengths And Weaknesses: **Strengths** 1. This paper improves the regret bound for the Multinomial Logistic Bandit without increasing computational complexity. 2. The proof is not merely a combination of existing ideas but introduces novel techniques. **Weaknesses** 1. The paper would benefit from a more detailed discussion on the motivations and potential applications of the multinomial logistic bandit. 2. Including discussions on real-world scenarios and experimental evaluations could further strengthen the paper. Other Comments Or Suggestions: None Questions For Authors: 1. Could you provide more applications where the multinomial logistic bandit can be utilized? Discussing real-world use cases would help strengthen the motivation for this work. 2. Is there a known lower bound on the regret? Additionally, can you discuss whether the variance-dependent term has an optimal form? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review! We are very pleased that the reviewer appreciates the significance of our technical contributions. In particular, we’re delighted that you found our proposed $\ell_\infty$-norm self-concordant property interesting. We hope this new property will inspire further research on MNL models, extending beyond the MNL bandit setting. Please find our responses to your questions below: --- ### **Motivation and Applications** The MNL bandit model offers a powerful and flexible framework for modeling choice behavior in sequential decision-making settings, where a decision-maker presents a subset of items (an assortment) and observes user selections based on relative utilities. This structure naturally arises in a wide range of real-world applications, making the study of MNL bandits both practically relevant and theoretically important. Some key domains include: - **E-commerce and Online Retail**: Platforms choose a subset of products to display to each customer. The customer's purchase behavior is naturally modeled using discrete choice models such as MNL. - **Recommender Systems**: Services like Netflix or Spotify present a small selection of items (e.g., movies or songs), and the user selects one. Modeling this behavior with an MNL model enables the system to learn user preferences and optimize engagement. - **Online Advertising**: Ad platforms select a set of ads to display, aiming to maximize click-through rate (CTR) or conversions. The MNL model captures how user choice probabilities depend on both the ad features and the composition of the ad set. - **Dynamic Pricing and Assortment Optimization**: Retailers dynamically select assortments subject to pricing or inventory constraints. The MNL model reflects how customer choices shift in response to changes in availability and pricing. - **Potential Application to RLHF**: The MNL bandit framework also has promising applications in Reinforcement Learning from Human Feedback (RLHF). Human preferences in RLHF are often modeled using BTL or Plackett-Luce (PL) models. Since MNL generalizes BTL to multi-choice settings and serves as a building block for PL (ranking models), it provides a natural and flexible foundation for modeling richer forms of preference feedback in RLHF. We believe that the MNL bandit model provides not only a unifying theoretical framework but also a practically impactful tool for real-world sequential decision-making. We will consider expanding this motivation section in the final version. We sincerely appreciate the suggestion to highlight these broader applications. --- ### **Variance-dependent lower bound** As discussed in Remark 4.7, our proposed algorithm achieves a regret upper bound that is nearly minimax optimal in both uniform and non-uniform reward settings, when compared to the worst-case (variance-independent) lower bounds established by Lee and Oh (2024). To the best of our knowledge, a formal variance-dependent lower bound has not yet been established for MNL bandits. The most closely related result is the instance-dependent lower bound (specifically, $\kappa^\star_t$-dependent) proposed by Lee and Oh (2024); see Proposition 1 in their paper. Here, $\kappa^\star_t$ is the variance of the optimal assortment $S^\star_t$ under uniform rewards (see Appendix E.2 of our paper for further discussion), rather than the assortment $S_t$ selected by an algorithm. Thus, whether a fully variance-dependent lower bound can be derived in the MNL bandit setting remains an open question. Very recently (less than three weeks ago), He and Gu (2025) [1] established the first variance-dependent lower bound for general variance sequences in linear contextual bandits [1]. Their techniques may be adaptable to the MNL setting, and we view this as an interesting direction for future research. [1] He, Jiafan, and Quanquan Gu. "Variance-Dependent Regret Lower Bounds for Contextual Bandits." arXiv preprint arXiv:2503.12020 (2025).
Summary: This work extends the MLE-based $\text{poly}(B)$-free online confidence sequences for generalized linear models, where $B$ is the norm-bound of the parameter set; to the setting of multinomial bandits to give a statistically SOTA algorithm (in terms of the dependence on $B$ and), which is also variance-dependent. The work also provides a computationally optimal algorithm under the relatively well-known "online Newton step" framework base on ellipsoidal confidence sets. Claims And Evidence: Although the "tightest" confidence sequence is a bit "over-claimed" (without a concrete lower bound), the mathematical results themselves are very clear and correct. Methods And Evaluation Criteria: N/A for this theoretical paper. Theoretical Claims: The results for both the computationally optimal algorithm and the statistically SOTA algorithm are well-supported by theorems and lemmas. The paper is well-written and easy to follow. Experimental Designs Or Analyses: The simulation results are consistent with the theoretical claims. The proposed algorithm even outperforms Thompson-sampling-based baselines in simulation even down to constant factors, which is surprising. Supplementary Material: The supplementary material is well-written and the proofs are complete and correct. Relation To Broader Scientific Literature: The setting of multinomial bandits is well-motivated in operations research and well-established in the literature, either UCB based or posterior sampling based. The paper fits well in this line of research. The techniques on mirror descent fits in the line of classic online convex optimization, which is also a well-established area. The statistical techniques on the sharp confidence intervals for generlized linear models are recently developed, and the paper fits well in this line of research. Essential References Not Discussed: No significant oversight. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: - The online confidence sequence is claimed to be the "tightest", so could the authors provide some lower bounds on the width confidence radius itself, or at least some intuition on why couldn't we do better? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive evaluation of our paper and for your valuable feedback! Below is our response to your question: > *The online confidence sequence is claimed to be the "tightest", so could the authors provide some lower bounds on the width confidence radius itself, or at least some intuition on why couldn't we do better?* When we referred to our result as the "sharpest online confidence bound," we meant that, to the best of our knowledge, it is the sharpest among existing works—not that it is fundamentally optimal or cannot be further improved. In the final version, we will clarify this point to prevent any potential misunderstanding or impression of overclaiming. We sincerely appreciate your constructive feedback, and please don’t hesitate to reach out if you have any further questions.
Summary: This paper studies the multinomial logistic bandits problem, in which the learner submits an assortment of at most $K$ arms and then receives binary feedback following the MNL model. The main contribution of the paper is the proposal of an improved method for this problem, yielding an online confidence set of $O(\sqrt{d\log t}+B)$, which improves upon previous work by factors of $\log t$, $\log K$, and $B$. In addition, the paper provides a variance-dependent bound for the multinomial logistic bandit model and an MLE-based method that completely removes the dependence on $B$. Claims And Evidence: From my perspective, the paper's claims are supported by clear evidence. Methods And Evaluation Criteria: The regret bound is used as the standard metric for the logistic bandit problem. Theoretical Claims: The proofs seem sound, although I haven't verified the details exhaustively. Experimental Designs Or Analyses: From my perspective, the experimental comparison is adequate. Supplementary Material: I have briefly reviewed Appendix D. Relation To Broader Scientific Literature: This paper presents an improved confidence set for the MNL bandit problem. Its primary results improve on the work of Lee and Oh (2024) by factors of $O(\log K)$ and $O(\log T)$, and the previously multiplicative term involving $B$ is now an additive term. Essential References Not Discussed: The related work is appropriately cited and discussed; however, the technical contribution relative to Faury et al. (2022) remains somewhat ambiguous. Please refer to the strengths and weaknesses section for further details. Other Strengths And Weaknesses: **Strengths of this Paper:** - The paper offers several improvements over previous work on MNL bandits, with improved dependence on $\log T$, $\log K$, and $B$. Additionally, it introduces a variance-dependent regret bound. - Experiments are provided to demonstrate the effectiveness of the proposed methods. **Weaknesses of the Paper:** Although the improvements are interesting, my main concern pertains to the overall significance of the results and the technical contribution. - Regarding the improvement in the dependence on $B$, the warm-up technique employed here appears closely related to that of Faury et al. (2022). As discussed in Remark 4.8, the primary difference is that the previous work utilises MLE, while the current study adopts online mirror descent. However, since Faury et al. (2022) also propose an adaptive method to verify the condition on the fly, it remains unclear how challenging this extension would be for the MNL problem. - Concerning the improvement in $K$, one of the main techniques is the constant self-concordant property of the MNL loss, which was already established by Lee and Oh (2024). - The variance-dependent optimal bound is interesting; however, it is unclear how large the term $\sigma_t^2$ may be, given that it depends on $S_t$, the set of arms selected by the learner. This dependence is somewhat unfavourable, as previous variance-dependent bounds in linear bandits are typically independent of the learner’s predictions [1,2]. [1] Improved Variance-Aware Confidence Sets for Linear Bandits and Linear Mixture MDP. NeurIPS 2021. [2] Improved Regret Analysis for Variance-Adaptive Linear Bandits and Horizon-Free Linear Mixture MDPs. NeurIPS 2022. Other Comments Or Suggestions: Please see the comments above. Questions For Authors: - Could you further emphasize the challenges of extending the warm-up technique from Faury et al. (2022) from an offline to an online setting? - Could you provide more discussion on the significance of the variance-aware bound in cases where $\sigma_t$ depends on $x_t$? - Although Theorem 4.5 successfully removes ${B}$ from the leading term, the dependence on the remaining term appears large compared to Lee et al. (2024). Could you elaborate on the tightness of the dependence on $B$? I would be happy to increase my score if the above concerns are adequately addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and provide valuable feedback. We would like to clarify that our main contribution is the development of the **first $B,K$-free, variance-dependent optimal** regret bound for MNL bandits, which cannot be achieved by existing algorithms such as those proposed by Faury et al. (2022) or Lee and Oh (2024). We provide detailed explanations below and sincerely hope this clarification helps convey the novelty of our approach. --- ### **Comparison to Faury et al. (2022)** While our algorithm shares some conceptual similarities with Faury et al. (2022)—notably the adaptive warm-up strategy—it fundamentally differs from their method in several key aspects beyond just the use of online updates: - **Decision rule**: The decision rule used in the adaptive algorithm (ada-OFU-ECOLog) by Faury et al. (2022) does **NOT** yield a $B$-free confidence bound. Their confidence bound, $\eta_t(\delta)$ (defined in Equation (15) of their paper), has an order of $\mathcal{O}(B d \log t)$. This $B$-dependence arises primarily because their method does not guarantee that the diameter of the search space, $\text{diam}_{\mathcal{A}}(\Theta_t)$, remains small or bounded by a constant over time. This is notably distinct from their non-adaptive algorithm (OFU-ECOLog, fixed-arm setting), which achieves a tighter confidence bound $\sigma_t(\delta)$ of order $\mathcal{O}(d \log t + B)$. We believe the reviewer may have inadvertently confused the confidence bound of ada-OFU-ECOLog with that of the non-adaptive version (OFU-ECOLog). In contrast, our proposed decision rule **always** guarantees a confidence bound that is completely **independent of $B$**. Combined with our new decision rule and the novel online confidence bound established in Theorem 4.2—which incorporates several technical novelties such as $\ell_\infty$-norm self-concordance, bounded intermediate terms, and a $d$-free regularization parameter—we successfully achieve a $B$-free regret bound. - **Prior knowledge of $\kappa$:** Faury et al. (2022) **require prior knowledge** of $\kappa$, which is often impractical or even impossible to know beforehand in real-world settings. In contrast, our algorithm does not rely on any prior knowledge of $\kappa$, making it significantly more practical. - **Gram matrix update:** In Faury et al. (2022), the Gramm matrix $V^{\mathcal{H}}\_t$ is updated **overly conservatively** using a weight parameter $\kappa$ as $V^{\mathcal{H}}\_t \leftarrow \sum_{ a\in \mathcal{H}\_{t+1}} \kappa a a^\top + \gamma_t(\delta)I_d$. Since $\kappa$ is very small—specifically, $\kappa = \mathcal{O}(e^{-3B})$ in logistic bandits, and even smaller in MNL bandits with $\kappa = \mathcal{O}(K^{-2} e^{-3B})$—the updates become too conservative. As a result, their search space $\Theta_t$ shrinks too slowly. In contrast, our algorithm does **NOT** apply such conservative updates to the warm-up Hessian matrix $H^w_{t+1}$. Instead, we update it using the second derivative of the loss, which enables the search space $\mathcal{W}^w_t(\delta)$ to shrink more rapidly and tightly. --- ### **$\ell_\infty$-norm self-concordance** Our self-concordant property is constructed with respect to the $\ell_\infty$-norm, whereas Lee and Oh (2024) use the $\ell_2$-norm. This distinction is crucial: naively applying an $\ell_2$-norm-based self-concordant property leads to a step size $\eta$ that depends on $K$ and $B$, which in turn causes the confidence bound to also depend on these parameters. We believe this novel $\ell_\infty$-norm self-concordant property is of independent theoretical interest and has potential applicability beyond bandits, extending more broadly into the literature on MNL models. --- ### **Variance-dependent bound** It appears the reviewer may be confusing the definition of variance used in the cited papers [1,2]. To clarify, similar to our setting—where variance depends on the selected assortment $S_t$—**the variance in [1,2] also explicitly depends on the chosen arm $x_t$**. Moreover, the regret bounds in both our work and [1,2] are **not** dependent on the learner’s predictions—they depend solely on the variance under the **true** parameter. --- ### **Tightness of $B$ in non-leading term** Since Theorem 4.12 (MLE version) achieves a regret bound with a non-leading term that is free of any polynomial dependence on $B$, we believe that the $B$-dependence in the non-leading term of Theorem 4.5 is likely not optimal. Eliminating this dependency is an interesting direction for future work. We appreciate the insightful question that brought attention to this issue. --- > We sincerely believe that we have addressed all of your concerns. If you agree, we would greatly appreciate it if you could kindly reconsider your evaluation. Additionally, please feel free to reach out if you have any further questions or require clarification on any point. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed feedback. I believe that the comparison and distinction between this work and that of Faury et al. (2022) deserve a more thorough and careful discussion in the paper. In addition, I remain somewhat uncertain regarding the significance of the variance-aware bound. Since the model depends on $S_t$, it is not entirely clear to me how substantial this improvement is. It would be helpful if the authors could provide concrete examples illustrating situations in which the variance-aware bound proves particularly beneficial. For instance, in the linear bandit setting, variance-aware bound may lead to improved guarantees for linear function approximation in RL. Moreover, in the case of linear bandits, as presented in Section 3 of [1], it seems that the variance $\sigma_k$ could be independent of the selected action $x_t$, given that the random noise $\epsilon_k$ may itself be independent of $x_t$. --- Reply to Comment 1.1.1: Comment: We truly appreciate your reply and the chance to provide more clarification. We would like to reiterate that our result is **strictly more general**—since the logistic model is a special case of the MNL model. And, our result is also **stronger** (sharper), as we provide a **tighter regret bound in terms of $B$** than Faury et al. (2022). Furthermore, when reduced to their logistic setting (i.e., the special case of $K=1$ and uniform rewards in ours), we can derive an instance-dependent bound (see Proposition 4.10), consistent with their result. As suggested by the reviewer, we are more than happy to provide a more detailed comparison with Faury et al. (2022) in the final version. We appreciate the suggestion—it will certainly strengthen our paper! Next, we address the reviewer's final question regarding the variance-dependent bound. As clearly stated in the "Discussion of Theorem 4.5", our result of $\mathcal{O}\left(d \log T \sqrt{\sum_{t=1}^T \sigma_t^2}\right)$ is a **strict improvement** over the previous best bound of $\mathcal{O}\left(B^{3/2} d \log K (\log T)^{3/2} \sqrt{T}\right)$ established by Lee & Oh (2024). This improvement follows from the fact that, by definition, $\sigma_t^2 \leq 1$. We also note that, to the best of our knowledge, there was NO prior variance-dependent or instance-dependent regret bound for MNL bandits under non-uniform rewards ($r_{ti} \in [0,1]$), which is a strictly more general setting than the uniform rewards case ($r_{ti}=1$). Regarding the dependence of $\sigma_t$ on $x_t$ in [1], we would like to clarify that **$\sigma_t$ is indeed defined as a function of $x_t$.** Recall the definition of $\sigma_t^2$: $\sigma_t^2 := \mathbb{E}[\epsilon^2_t |\mathcal{F}\_t] = \mathbb{E}[ (r_t - x_t^\top \theta^\star)^2 |x_1, \epsilon_1, \dots, x\_{t-1}, \epsilon\_{t-1}, x_t]$. For simplicity, suppose that $\epsilon_t$ is independent of the history, given $x_t$. Then we can write: $\sigma_t^2 := \mathbb{E}[\epsilon^2_t | x_t]$, which shows that $\sigma_t^2$ is the *conditional variance* given $x_t$. Hence, **by definition, $\sigma_t^2$ depends on $x_t$**. Note that [1] does NOT state or assume that $\epsilon_t$ is independent of $x_t$. In fact, in their RL setting, the variance of the value function does depend on the state. Moreover, the form of our (conditional) variance is identical to that in [1]. Specifically, our variance is defined as: $\sigma_t^2 := \mathbb{E}[(r_{ti} - \mathbb{E}[r_{tj}|S_t])^2 | S_t]$, where the expectation is with respect to the MNL choice distribution given $i \in S_t$. We hope this clarifies the reviewer’s concern regarding the significance of our variance-dependent bound. That said, we would like to emphasize that, while the variance-dependent result is certainly interesting in its own right, our main theoretical contribution lies in the development of the first **$B,K$-free online confidence bound** and the sharp **regret bound** derived from it. Even in the absence of the variance-dependent result, we believe our findings offer substantial theoretical contributions and merit recognition. In the light of clarification, we kindly and respectfully ask the reviewer to take these key contributions into account and to consider adjusting the score accordingly.
Summary: The main contribution of this paper is proposing an improved online confidence bound for multinomial logistic models. Moreover, the authors applied their results to MNL bandits to achieve an enhanced result. Further, they also showed numerical experiments. Claims And Evidence: I verified the correctness of some theorems. Moreover, numerical experiments make sense to me. Methods And Evaluation Criteria: The evaluation criteria are natural. Theoretical Claims: I verified the correctness the theoretical results in Appendix C. Experimental Designs Or Analyses: The experimental results make sense to me. Supplementary Material: I verified the correctness of some theoretical results in Appendix C. Also, I skimed all other parts. Relation To Broader Scientific Literature: Bandits are a fundamental topic in learning theory. Any good result in this context will be valuable. This paper provides a new tool, namely a confidence bound for multinomial logistic and applies it to MNL bandits to achieve improved results. This is a solid theoretical paper. I think it is a good contribution to the field of statistical learning theory. Essential References Not Discussed: - Other Strengths And Weaknesses: I like the writing of the paper. In particular, I like proof sketches and intuitions. Other Comments Or Suggestions: - Questions For Authors: I do not have any specific question. Ethical Review Concerns: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive support and recognition of the value of our work! We truly hope that this research helps to illuminate a new direction for MNL models and bandit algorithms. Please feel free to reach out if you have any further questions.
null
null
null
null
null
null
Learning-Augmented Hierarchical Clustering
Accept (poster)
Summary: Hierarchical clustering, wherein vertices (representing items in a dataset) are grouped into clusters of increasing refinement following a tree structure, is a well-motivated procedure of interest to practitioners and gives rise to interesting theoretical problems. In particular, several of the most prominent objectives in hierarchical clustering admit impossibility results. In this work, the authors show that having even only marginally informative predictions allows one to overcome these impossibility results. They consider a “splitting oracle,” which given an input triple of three vertices outputs a single vertex. With probability $p$, this output vertex is the first vertex that would be separated from the other two in some (near-)optimal hierarchical clustering tree. With probability $1-p$, the oracle outputs an arbitrary vertex. For small constant values of $p$, e.g. $p= ½ + \varepsilon$ for $\varepsilon > 0$ fixed, this oracle gives only a small amount of additional information to the algorithm. Yet the authors show that this is enough to drastically reduce the hardness of the problem. Claims And Evidence: Claim: Learning-augmented algorithms can efficiently approximate optimal HC trees for prominent HC objectives, breaking impossibility barriers. Evidence: - In Thm 1, they show that for $p=9/10$, there exists a polynomial-time algorithm that makes $O(n^3)$ queries and produces a constant approximation to the Dasgupta (Das) optimal HC tree. - In Thm 2 they improve the running time, showing that an $\tilde{O}(n^3)$ time algorithm that makes $O(n^3)$ queries produces an $O(\sqrt{\log\log(n)}$ approximation to the Das optimal HC tree. - In Thm 3 they show an $\tilde{O}(n^2)$ time algorithm that makes $O(n^2)$ queries produces a constant approximation to the Moseley-Wang (MW) optimal HC tree. Methods And Evaluation Criteria: Not applicable; this is a theoretical work and does not include experiments. Theoretical Claims: I did not check any of the proofs in the supplementary material. Experimental Designs Or Analyses: Not applicable; this is a theoretical work and does not include experiments. Supplementary Material: I read Appendix I.3 to clarify my understanding of the oracle success probabilities (see Questions and Comments/Suggestions). Relation To Broader Scientific Literature: Learning-augmented algorithms/algorithms with predictions have been an area of expanding recent study. Several works in this area have focused on showing that even marginally-informative predictions (i.e. those whose probability of giving a correct response is arbitrarily close to ½) can break hardness barriers for classical well-studied problems, such as max-cut, independent set. Cohen-Addad, Vincent, et al. "Max-Cut with $\epsilon $-Accurate Predictions." arXiv preprint arXiv:2402.18263 (2024). Braverman, Vladimir, et al. "Learning-augmented maximum independent set." arXiv preprint arXiv:2407.11364 (2024). This work similarly studies how predictions can break impossibility results for hierarchical clustering. Essential References Not Discussed: I do not know of any essential references that are not discussed. Other Strengths And Weaknesses: Strengths: - This work clearly establishes that predictions fundamentally decrease the hardness of well-motivated hierarchical clustering problems. - Several of the algorithms proposed (esp. Algo. 2 and 3) seem like they could be implemented in practice. It would be interesting to compare the performance of these theoretically-grounded algorithms with heuristics used for HC in practice. - The partial HC trees introduced could be objects of further study. They seem well-motivated for practitioners, who may be content to learn a hierarchical clustering that encompasses only the majority of vertices, and leaves some reasonably-sized sub-population ambiguous. I find no major weaknesses in the work, but I did not check the proofs included in the supplementary material. I do think the assumption of large probability of success (p=9/10) was surprising to me, as in other works (e.g. max-cut and max IS with predictions) the emphasis is on overcoming barriers using arbitrarily small probability of success. I would suggest that an abridged discussion of this assumption be presented closer to the statements of the main theorems, as currently it could be overlooked in the main body of the paper. Other Comments Or Suggestions: The assumption of large success probability p=9/10 and highlights from the Discussion of general success probabilities (Appendix I.1) should be emphasized earlier in the text. When I first read Thms. 1-3, I assumed based on the exposition that they held for any arbitrarily small yet constant probability of success (e.g. ½ + \varepsilon). After reading Appendix I.1, it is now my understanding that for arbitrary (adversarial) incorrect responses, a significantly large probability of success is necessary (see Questions below), while for randomly-chosen incorrect responses probability $½ + \varepsilon$ does suffice. Is this understanding correct? In the exposition, $\delta$ is used to denote success probability, while later $p$ is used. I would suggest picking one of the two and using it consistently, as there's already several symbols floating around that describe success probabilities (e.g. $\varepsilon$ and $C$). If I am confused and actually $\delta$ and $p$ have different meanings, please correct me. Questions For Authors: I already feel the paper is compelling. These questions are mostly for my own understanding. 1. Is there an intuitive reason why the results established in this work only apply to the Mosely-Wang (MW) and Dasgupta (Das) objectives, as opposed to the Cohen-Adad (CA) objective? In particular, it is striking that the impossibility results for oblivious algorithms for both MW and Das stem from the Small Set Expansion hypothesis, whereas impossibility results for the CAobjective stem from UCG. Is the fact that your results focus on MW and Das implicitly connected to this divide? 2. In Appendix I.1, I am confused about some of the remarks about general success probabilities. The penultimate line states “...we remark our algorithm would work for any success probability ½ + C for sufficiently large $C=\Omega(1)$.” I am confused by the meaning of $C=\Omega(1)$ when discussing success probabilities, as this does not seem to imply any nontrivial lower bound on C. Do the authors have a more specific lower bound on ½ + C in mind? The main results of the paper imply that ½ + C = 9/10 suffices, while the counter-example in Appendix I.1 implies that ½ + C must be at least ¾. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your careful review, positive evaluation, and helpful questions. Our responses and clarifications are as follows. > Relation To Broader Scientific Literature Thanks for pointing out the additional papers related to our work. We will add them and some discussions about the connections. > I do think the assumption of large probability of success (p=9/10) was surprising to me, as in other works (e.g. max-cut and max IS with predictions) the emphasis is on overcoming barriers using arbitrarily small probability of success. I would suggest that an abridged discussion of this assumption be presented closer to the statements of the main theorems, as currently it could be overlooked in the main body of the paper. > After reading Appendix I.1, it is now my understanding that for arbitrary (adversarial) incorrect responses, a significantly large probability of success is necessary (see Questions below), while for randomly-chosen incorrect responses, probability $½+\varepsilon$ does suffice. Is this understanding correct? > (Q2) In Appendix I.1, I am confused about some of the remarks about general success probabilities. The penultimate line states “...we remark our algorithm would work for any success probability ½ + C for sufficiently large C. We answer these concerns and questions together here since all of them are related to the success probability of the learning-augmented oracle. We first want to say that the reviewer’s understanding of our algorithm is correct: in the setting of adversarial noise, the success probability should be at least $1/2+C\geq 3/4$, and $9/10$ suffices. On the other hand, if the errors of the oracle are random, we should be able to ensure correctness with $1/2+\varepsilon$ for any constant $\varepsilon>0$ (we forgot to mention “constant” in the current Appendix I.1 and will add that in the final version). The reason our algorithm should work with a sufficiently high success probability is exactly due to the hierarchical structure of the problem. In both max-cut and MIS papers (which we are sufficiently familiar with), the errors in the algorithm are ‘one-shot’. However, in the HC problem, if the construction of the partial tree is wrong at any level, the error will propagate to all subsequent nodes, and it is not clear how one could control the error if this happens. Furthermore, the oracles in the max-cut and MIS papers are about ‘memberships’, while the oracle in our setting is about ‘relationships’. The latter type of oracle does not give *any* trivial solution or even a ‘partial solution to be fixed’ (like the case in the MIS paper). Therefore, a constant success probability much larger than $1/2$ is necessary. > In the exposition, $\delta$ is used to denote success probability, while later $p$ is used. Thanks for the catch! We will unify this and use $p$ as the success probability. > Is there an intuitive reason why the results established in this work only apply to the Mosely-Wang (MW) and Dasgupta (Das) objectives, as opposed to the Cohen-Adad (CA) objective? In particular, it is striking that the impossibility results for oblivious algorithms for both MW and Das stem from the Small Set Expansion hypothesis, whereas impossibility results for the CAobjective stem from UCG. Is the fact that your results focus on MW and Das implicitly connected to this divide? We do not believe our results are connected to the division between the UGC vs. small set expansion. The reason is that our constructions for the weakly and strongly consistent partial trees are objective-oblivious, and we can also construct these partial trees w.r.t. the Cohen-Addad objective. The reason for us not getting results for the Cohen-Addad objective is due to the limited bandwidth, we are not sure how the error would propagate in the CA objective. It will be an interesting direction to explore as a future step. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions--I will maintain my score. I would like to see this paper at ICML.
Summary: The paper studies learning-augmented algorithms for hierarchical clustering (HC). In this problem, a set of data points is given along with a similarity measure, which induces a weighted undirected graph $G=(V,E,w)$. The goal is to partition the points/vertices into a binary tree that captures the hierarchical relationships within the data. The paper considers two objective functions: the Dasgupta cost minimization objective (Dasgupta, 2016) and the Moseley-Wang revenue maximization objective (Moseley & Wang, 2017). For Dasgupta’s objective, prior work (Charikar & Chatziafratis, 2017a; Roy & Pokutta, 2017) showed that there is no polynomial-time algorithm that could achieve any constant-factor approximation under the Small Set Expansion hypothesis. The best-known algorithm achieves an $O(\sqrt{\log n})$-approximation (Charikar & Chatziafratis, 2017b; Cohen-Addad et al., 2019). For the Moseley-Wang objective, (Chatziafratis et al., 2020b) showed that achieving a $(1 − C)$-approximation is impossible under the Small Set Expansion hypothesis for some fixed constant $C \in (0, 1)$. Motivated by the practical aspect of the problem, the paper considers HC with learning-augmented oracles. Specifically, it introduces a splitting oracle that, when queried for a triplet of vertices, outputs the vertex that is first separated away from the other two with respect to an optimal HC tree. The oracle returns the correct answer with probability $p$ and returns arbitrarily with the remaining probability. Using such a predictor, for Dasgupta’s objective, the paper achieves a constant-factor approximation in polynomial time (albeit with a large exponent), and a more practical $O(\sqrt{\log \log n})$-approximation that improves upon the state-of-the-art. For the Moseley-Wang objective, the paper achieves any constant factor approximation in polynomial time, which overcomes prior impossibility barriers. The key technical contribution of the paper is the introduction of partial HC trees, which capture structural properties of optimal HC trees. The paper shows that these partial HC trees can be efficiently constructed given access to the splitting oracle, independent of the specific objective function. The paper further explores sublinear algorithms for HC with splitting oracles (in the streaming and PRAM model), and obtains results that outperform their non-learning counterparts. Claims And Evidence: This paper is purely theoretical. All claims are formally stated as theorems or lemmas and are supported by rigorous proofs. Methods And Evaluation Criteria: This paper is purely theoretical, and the proposed methods are analyzed through rigorous mathematical proofs. The evaluation is based on formal theoretical criteria, which are appropriate for the problem at hand. Theoretical Claims: I have checked all the content in the main text (i.e., the first 8 pages). However, I did not thoroughly verify all the proofs in the appendix. I reviewed the technical overview and sketched the proofs, and they appear to be correct. Experimental Designs Or Analyses: NA Supplementary Material: I have reviewed Appendix A and B in detail and sketched the remaining parts. The supplementary material is clearly structured and easy to follow. The proofs appear to be correct. Relation To Broader Scientific Literature: The paper studies learning-augmented algorithms for hierarchical clustering. Hierarchical clustering is a well-studied problem. There are two main objective functions considered in prior work: the Dasgupta cost minimization objective (Dasgupta, 2016) and the Moseley-Wang revenue maximization objective (Moseley & Wang, 2017). The paper provides improved algorithms with respect to these two objectives in the learning-augmented framework, showing the power of machine learning oracles. Therefore, the paper is a good addition to the literature on learning-augmented algorithms. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: - Hierarchical clustering is a fundamental and extensively studied problem. The paper is well-motivated to study this problem in the learning-augmented setting with recent advances in machine learning. The paper is a good addition to the literature on learning-augmented algorithms. - The technical contributions are solid. The theoretical results outperform prior state-of-the-art and even impossibility barriers. - The paper is extremely well-written. The technical sections are clearly structured and easy to follow. Weaknesses: - the error probability of the splitting oracle cannot be too large, i.e., the success probability of the oracle is assumed to be at least 1/2+C, for sufficiently large $C=\Omega(1)$. - Since the paper is motivated by the recent advances in machine learning, I think some empirical results for the proposed algorithms would strengthen the evaluation. Other Comments Or Suggestions: Typos: - Page 1, Line 49, right column: $1+C$ -> $1-C$ - Page 3, Line 149, right column: deonte -> denote - Page 4, Line 168, left column: I think $level_\mathcal{T}(\cdot)$ is not formally defined. - Page 4, Definition 5: Should $\mathcal{T}$ be $\mathcal{T}^*$? - Page 5, Definition 8: ‘the same set of vertex’ -> ‘the same set of vertices’; also applies to Definition 9. - Page 7, Line 356, left column: I think a period is missing. Questions For Authors: Q1: I am curious about the practical aspects of the splitting oracle in the streaming setting. While the paper assumes that the oracle is given offline and queried as a black box, is there any possibility that the oracle can be implemented using small space? Q2: Do you see any chance for your algorithms to be made sublinear time? Q3: In line 2813, you mentioned that for *random* incorrect answers, you will be able to work with $1/2+\epsilon$ success probability for any $\epsilon>0$. Could you give more details on that? Specifically, how does the resulting approximation ratio depend on $\epsilon$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful questions and positive evaluation. Our responses to the questions are as follows. > The error probability of the splitting oracle cannot be too large We agree with the reviewer that this is a limitation of our algorithm, and due to the combinatorial nature of many subroutines, it’s unclear exactly what minimal error can be tolerated on this front. However, we hope that modern ML models can achieve decent accuracy across the entirety of the similarity graph, which may consist of a number of dissimilar items. We agree that additional experiments could add value to the paper. We have conducted preliminary experiments on social network-like graphs and plan to expand the discussion accordingly. > Typos Thanks for pointing these out. We will make changes accordingly. > Implementing the oracle in small space This is a great question. We are not exactly sure how the oracle could be implemented in small space. However, since the input of the oracle is only triplets of vertices, and the output is essentially an ordering, the model size could be very small (say, for a 3-layer neural network, we could implement with 3xNx1 with one N-node hidden layer). Of course, to ensure good performances, the actual model might need to be bigger. We could also use cloud-based services, and the aim is to reduce the memory cost on local machines. For instance, we could query the triplets to LLMs and implement the streaming algorithm on our local machine. Exploring whether these ideas work is an interesting direction to pursue. > Potential sublinear time algorithms We believe it is possible to revise our weakly consistent partial tree algorithm to achieve *sublinear* (i.e., $o(n^2)$) time. The `counterpart testing’ subroutine, as we could see in the construction of the strongly consistent partial tree, could be made $O(n \log{n})$ time. Due to the balanced partition, this part should take $\tilde{O}(n)$ time across the algorithm. However, the ‘root test’, which is crucial to the correctness of our algorithm, still requires $O(n^2)$ time. Making this part sublinear time appears to require more work, and is an interesting direction to explore as future research. > Performance of the algorithm with random noise Due to the persistent noise and the combinatorial nature of our algorithm, it is difficult to get something in the form of query-quality trade-offs. If the noises are random, for the adversarial example we mentioned in appendix I.1, as long as $\varepsilon=\Omega(1/n)$, we should be able to obtain enough signals to distinguish the cases for the two sides. We also realize that the way it is currently written is not precise, and we meant to say that if the noise is random, we could guarantee correctness for any *constant* $\varepsilon$ (as opposed to some fixed $C\geq 1/4$). We will make changes about this in future versions.
Summary: This paper explores hierarchical clustering in a learning-augmented framework. Unlike other clustering methods such as $k$-means or $k$-medians, hierarchical clustering constructs a clustering tree $\mathcal{T}$ to represent similarity across all item pairs, and does not have a predetermined target number of clusters. The quality of $\mathcal{T}$ can be assessed using various metrics; this work focuses on the Dasgupta and Moseley-Wang objectives. The study assumes access to a splitting oracle that, given any triplet of items $(u,v,w)$, predicts which one should split away first in the optimal clustering tree $\mathcal{T}^*$. The oracle’s predictions are assumed to be correct with probability $9/10$ and arbitrary otherwise. The paper investigates how leveraging these predictions enables the design of efficient polynomial-time approximation algorithms. For the Dasgupta objective, the best-known polynomial-time algorithm achieves an $O(\sqrt{\log n})$-approximation, and no polynomial-time algorithm can guarantee a constant approximation. The authors overcome these limitations by utilizing the splitting oracle. They show that with $O(n^3)$ oracle queries, it is possible to achieve with high probability the following - A constant-factor approximation in $O(n^{50002})$ time, instead of exponential time in the setting without predictions. While this result is not practical because of the extremely high polynomial degree, it is interesting from a theoretical point of view as it shows that using an oracle allows breaking classical impossibility results. - An $O(\sqrt{\log \log n})$-approximation in $O(n^4)$ time. For the Moseley-Wang objective, the authors propose a $(1-o(1))$-approximation algorithm that requires $O(n^2)$ oracle queries and runs in $O(n^2 \text{poly} \log n)$ time. This surpasses the impossibility result of Chatziafratis et al. (2020) for the setting without predictions. ## Update after rebuttal My primary concern with this paper is in the assumption that the oracle is correct with a fixed, known probability of 0.9. This assumption weakens the results, as it is unclear how the algorithm's performance scales with varying oracle accuracy. Moreover, the assumption that the success probability is known contradicts the core motivation behind learning-augmented algorithms, which is to design algorithms having a good performance without any knowledge of the oracle accuracy. In their rebuttal, the authors state that their algorithms still work when the oracle is correct with probability $1/2 + \epsilon$, and gives uniformly random predictions with the remaining probability. They refer to this as "the standard model," but this is not correct. The standard model in algorithms with $\epsilon$-accurate predictions assumes that the oracle is correct with probability $\epsilon$ and adversarial otherwise, which is the model studied in the paper, and which requires $p=0.9$. The authors also claim that the algorithm does not require precise knowledge of $p$, but only that $p \geq 1/2 + \Omega(1)$ for some sufficiently large $\Omega(1)$. However, without specifying the constant hidden in the $\Omega(1)$, this claim is trivial and uninformative, as the algorithm already works for $p \geq 0.9$. Thus it does not address my concern. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: All the proofs are in the appendices. I skimmed through some proofs to grasp the main ideas and overall logic but did not examine any in detail. Experimental Designs Or Analyses: NA Supplementary Material: I only read briefly some proofs. Relation To Broader Scientific Literature: The paper studies the problem of hierarchical clustering augmented with a splitting oracle. To my knowledge, this is the first paper studying this problem. Therefore, there are no prior works proving results in the same setting. The authors show that using the oracle allows significant improvement upon previous algorithms and results for the problem of hierarchical clustering, and even break impossibility results. The improvements are cited in my summary of the paper. Essential References Not Discussed: The paper gives a very good overview of state-of-the-art results of hierarchical clustering without predictions, which allows a good understanding of the proposed results and their relevance. The paper deviates from the standard setting of learning-augmented algorithms, which assumes no guarantees on the quality of predictions. Instead, it considers a specific scenario where predictions are independently accurate with a certain probability, which is referred to in the literature as "algorithms with $\epsilon$-accurate predictions". While some prior works on similar settings (for other problems) are cited, there is no discussion on the model. Maybe this should be made more explicit in the introduction. A very interesting aspect of the paper is that the algorithm actively decides when to query predictions, rather than receiving them as a fixed input as the standard setting in learning-augmented algorithms, and an objective is to have a small number of queries. The authors should emphasize this feature more and consider citing relevant works that explore similar settings. Such works include for example - "Parsimonious Learning-Augmented Caching", Im et al. 2022 - "Algorithms for Caching and MTS with reduced number of predictions", Sadek et al. 2024 - "Learning-Augmented Priority Queues", Benomar et al. 2024 Other Strengths And Weaknesses: **Strengths**. - The setting is very interesting, and the oracle model makes sense - The authors consider different objective functions and improve upon standard results for all of them - The problem is technically challenging, yet the paper provides many strong results. Weaknesses: - Assuming an oracle with accuracy $p = 0.9$ is quite restrictive. The authors mention in the appendix that their results can be generalized for $p = 1 + \epsilon$ with $\epsilon$ above some unknown constant—though this is trivial, as the results already hold for $p = 0.9$. However, I find their arguments unconvincing. Understanding the dependency on $p$ is crucial for quantifying the impact of oracle accuracy. Additionally, for $p = 0.5$, the algorithm should be able to perform as good as standard hierarchical clustering methods without predictions (e.g., for the Dasgupta objective). This suggests the possibility of designing an algorithm that interpolates between the performance of the best algorithm without predictions, when $p$ is close to 1/2, and the optimal algorithm when the oracle is perfectly accurate. Moreover, the choice of $p=0.9$ is extremely arbitrary if the precise value $0.9$ is used nowhere in the proofs. - The paper is difficult to read: Some sections/paragraphs are extremely wordy (Section B.1 for example), notations are used before being introduced, some paragraphs are very poorly written (for eg. L 403-412 (left column)), some sentences are very informal (L 813: 'this is a big “suppose” as we will see later'), ... Other Comments Or Suggestions: The accuracy of the oracle is denoted $1-\delta$ in Lign 72 (right column), while it is denoted $p$ in the rest of the paper Questions For Authors: 1/ In Section 5, the complexity of Algorithm 3 seems quadratic and not near linear: the algorithm uses the construction of Theorem 11, which requires O(n^2) queries and $\Omega(n^2)$ time, then another sublinear operation. The overall complexity is near-quadratic, and not near-linear as the section title suggests. Can the authors clarify this? 2/ Can the authors give a reference or proof for the concentration bound of Proposition 13? The bound they state is known to be true for Bernoulli random variables, i.e. taking values in the set {0,1}, and not for any random variables bounded in $[0,1]$ a.s. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your detailed review and insightful questions. Our responses and clarification are as below. > The paper deviates from the standard setting of learning-augmented algorithms, which assumes no guarantees on the quality of predictions. Although there is a body of literature that assumes no guarantees on the quality of predictions, we believe the setting with performance guarantees for the predictions is also quite standard, especially for graph-related problems. To that end, we have expanded the discussion on the “algorithms with $\epsilon$-accurate predictions”, including “Learning-Augmented Streaming Algorithms for Approximating Max-Cut” (DVP [ITCS’25]), “Learning-Augmented Approximation Algorithms for Maximum Cut and Related Problems” (COGLP [NeurIPS’24]), and “Learning-augmented Maximum Independent Set” (DBSW [APPROX’24]). We want to emphasize that since we study an *offline* problem (as opposed to an online one), we could always guarantee *robustness* studied in the online learning-augmented algorithm literature by running a separate algorithm without prediction. In the end, we could evaluate the costs of the two HC trees (with and without predictions) and output the one with the lower cost. > Discussions about “algorithm actively decides when to query predictions, rather than receiving them as a fixed input” We agree that our algorithm requires adaptivity, and we can add more discussion about this in future versions. Thanks for providing the related literature. > Assuming an oracle with p=0.9 accuracy is quite restrictive. First, we want to clarify that our discussion did not mention the error probability of $1+\varepsilon$ but rather $1/2+\varepsilon$ and $1/2+\Omega(1)$. Due to the offline nature of our problem, indeed, we could recover the best result for hierarchical clustering when the prediction is bad by simply running a separate algorithm without using any prediction. We also discussed a simple example (in I.1) that requires the correct probability to be at least 3/4. This is due to the combinatorial subroutines used in our algorithm > Accessibility. We thank the reviewer for the comments on readability. We have strived to make the paper readable by inserting multiple figures and explaining things in detail as much as possible, e.g., by providing intuition for both our algorithm and our analysis in the technical overview that is Appendix B. Although elements of our results have multiple moving parts and are complex to describe, we completely agree that accessibility is an important part of the paper. Thus, we will continue to make editorial passes over the document to improve presentation. > The accuracy of the oracle is denoted $1-\delta$ vs. $p$ We thank the reviewer for pointing this out, and we have made changes accordingly. > In Section 5, the complexity of Algorithm 3 seems quadratic and not near linear. We view the input as the similarity graph with $\Omega(n^2)$ edges on $n$ vertices, so that the input size is $\Omega(n^2)$ and thus our algorithm that has runtime roughly quadratic in the number of vertices actually has runtime near-linear in the input size. We have clarified this point. > The bound in Proposition 13. We only deal with *discrete* random variables in this paper, and for all the applications of Proposition 13, we only used supports on $\{0,1\}$. For discrete random variables with supports as a (finite) set of value between $[0,1]$, the bound should be true also. We agree that Chernoff bounds on *continuous* random variables might be different, and we will make Proposition 13 more precise in later versions. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. **$\epsilon$-accurate predictions.** The setting of $\epsilon$-accurate predictions is indeed quite standard, and I completely agree that studying this model makes sense. My point, however, was that it does not align with the classical learning-augmented framework. A brief discussion highlighting this distinction would be beneficial. **Robustness.** I agree that the algorithm can be easily adapted to ensure robustness and had no concerns on this point. **Probability $p=0.9$ of accuracy.** The $1+\epsilon$ in my review was a typo; thanks for noticing. I remain unconvinced by the authors’ response regarding the choice of $p=0.9$. This value seems both very high and arbitrary. Why was $p=0.9$ chosen specifically over, say, $p=0.8$? The example in Appendix I.1 demonstrates that the proposed algorithm requires a success probability of at least 3/4, but this does not imply that no improvements are possible for $p = 1/2 + \epsilon$ for arbitrary $\epsilon > 0$. Rather, it suggests that better algorithms are needed to potentially perform well under weaker guarantees on $p$. Fixing the value of $p$ obscures important limitations of the algorithm. If the analysis only holds for $p\geq 1/2 + C$ for some known $C>0$, then it is crucial to see how the behaviour degrades as $p$ approaches the critical value $1/2 + C$, as this would help understanding the limitations of the algorithm and also help improve it in future work. Finally, since $p$ is fixed, the algorithm's design implicitly assumes prior knowledge of its exact value.. This is a limitation from the point of view of designing algorithms with $\epsilon$-accurate predictions. Prior work in this line of research, including those cited in the paper and the authors’ rebuttal, assumes that the accuracy of the ML model is either unknown or known only within a certain range rather than being precisely determined. Fixing $p$ in the paper also hides this important aspect. The assumption $p = 0.9$ is the major limitation of the paper in my opinion, as outlined above and in my review. I remain unconvinced by the authors' rebuttal regarding the justification for fixing the value of $p$. For these reasons, I maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for the continued correspondence in this discussion. > I remain unconvinced by the authors’ response regarding the choice of $p=0.9$. This value seems both very high and arbitrary. Why was $p=0.9$ chosen specifically over, say, $p=0.8$? We remark that our algorithm can actually handle success probability $\frac{1}{2}+\varepsilon$ for $\varepsilon=\Omega(1)$ in the standard model. However, in our more nuanced model, we can only tolerate a sufficiently large error probability. Specifically: - The main challenge is dealing with a sufficiently large success probability along with the presence of *adversarial* inputs when the answer is incorrect. In our model, when the oracle errs with probability $1-p$, we assume the answer is the worst-case scenario, potentially even adaptive, designed to maximally obstruct the algorithm's success. We remark that other works studying adversarial input upon prediction failures often require additional assumptions to deal with these cases, e.g., literature in learning-augmented $k$-means clustering generally that adversarial failures must actually be distributed in a "uniform" way. We do not make such an assumption here. - Mathematically, our analysis requires on combinatorial properties that relate the ratio of functions of $p$ and $1-p$, so we cannot achieve $p$ arbitrarily close to $0.5$. - If the incorrect answers are random (i.e, with probability $1-p$, toss a coin and answer a vertex $x\in (u,v,w)$ that splits away from the other two), then our algorithm **works with any $\frac{1}{2}+\Omega(1)$ success probability**. This discussion is currently in Appendix I, and we will expand the discussion in future versions. - In particular, many of the previous paper that studied $\frac{1}{2}+\varepsilon$ success probability has more structured error that is consistent with the ‘random error’ model in our case. For instance, in the paper “Learning-Augmented Approximation Algorithms for Maximum Cut and Related Problems” (COGLP [NeurIPS’24]) and “Learning-augmented Maximum Independent Set” (DBSW [APPROX’24]), when the oracle is wrong, the answer simply gets flipped, as opposed to looking into the algorithm’s state and give an adversarial answer. In fact, this property is crucial for correctness in the algorithm of “Learning-augmented Maximum Independent Set” (DBSW [APPROX’24]). **In these settings, our algorithm can indeed work with $1/2+\varepsilon$ probability for any $\varepsilon=\Omega(1)$.** - While it might be possible to develop improved algorithms specifically targeting the counterexample in Appendix I, the adversarial noise makes it unclear how such an algorithm would function effectively. For the application of deciding “whether vertex $v$ is in the same partition of $u$”, the sheer *noise* could overwhelm the *signal* in our example. As such, any algorithm with further improvements with *adversarial noise* would need to follow some other frameworks. - Finally, we remark that instead of knowing the exact value of $p$, we only need to know that $p$ has a lower bound, i.e., $p\geq\frac{1}{2}+\Omega(1)$ for some sufficiently large $\Omega(1)$. We thank the reviewer for their valuable insights, and we will incorporate a more detailed discussion about the counterexample and the difference between random and adversarial noise in future versions. We hope our response has addressed your questions about the failure probability settings.
Summary: Brief Summary of the Paper: The paper introduces and studies learning-augmented algorithms for hierarchical clustering (HC) where the type of advice given comes in the form of a splitting oracle. This continues a long line of research on algorithms with ML predictions (or augmented with ML advice) and extends ideas to the context of hierarchical clustering. The problem itself is Hierarchical Clustering where the output of the algo is a tree whose leaves are the given datapoints, and the input is a collections of points with pairwise similarities. The formal framework studied here is under the two objectives for Dasgupta's cost and Moseley-Wang reward objective. These two are complementary objectives (they add up to sth constant based on the input graph) and they try to encode the fact that similar points should be split as late in the hierarchy as possible. The main contributions are 5 algorithms that leverage this abovementioned ML splitting oracle to achieve improved approximation guarantees for Dasgupta’s and Moseley-Wang’s objectives. Importantly, all the algorithms construct partial HC trees (strongly or weakly consistent with the optimal tree) using the oracle, then refine them using techniques like sparsest cut. Sparsest Cut was one of the first methods to be used for obtaining good approximation algorithms for Dasgupta's cost and here the authors show how to exploit it for the framework of advice. A meta-comment here, is that the authors, through this ML oracle splitting aadvice, they are able to go beyond hardness results that hold for the HC objectives in the standard worst-case input (i.e. without the advice). Claims And Evidence: Yes. Methods And Evaluation Criteria: The paper assumes we have access to a splitting oracle; all algorithms are based on this assumption. Appendix J shows that it is possible to learn such an oracle efficiently. Overall, this setting is reasonable for a theory paper, and it is interesting from a practical perspective as well. Theoretical Claims: This is a proof-heavy paper. I checked the main body and proofs from Appendix B (Technical Overview), and overall I believe it is solid. Experimental Designs Or Analyses: N/A Supplementary Material: Appendices include proofs for theoretical claims. The paper has no other supplementary materials. Relation To Broader Scientific Literature: The paper is studying HC which is at the heart of many different applications. So as such, it presents interesting results to many other papers from the litearture. Essential References Not Discussed: I believe that it's better to point out why adversarial noise is important: constructing the optimal tree is easy if the oracle always gives the correct triplet. (check Aho et al. "Inferring a tree from lowest common ancestors with an application to the optimization of relational expressions.", 1981). Studying models where triplets (or even quartets info is given, has been studied in other works listed below, and it would be good to provide some comparison with your setting) Other related works not cited: 1. Chatziafratis, Vaggos, Rad Niazadeh, and Moses Charikar. "Hierarchical clustering with structural constraints." International conference on machine learning. PMLR, 2018. 2. Ghoshdastidar, Debarghya, Michaël Perrot, and Ulrike von Luxburg. "Foundations of comparison-based hierarchical clustering." Advances in neural information processing systems 32 (2019). 3. Alon, Noga, Yossi Azar, and Danny Vainstein. "Hierarchical clustering: A 0.585 revenue approximation." Conference on Learning Theory. PMLR, 2020. 4. Vaggos Chatziafratis, Mohammad Mahdian, Sara Ahmadian. "Maximizing Agreements for Ranking, Clustering and Hierarchical Clustering via MAX-CUT.", AISTATS 2021 5. Jiang, Tao, Paul Kearney, and Ming Li. "A polynomial time approximation scheme for inferring evolutionary trees from quartet topologies and its application." SIAM Journal on Computing 30.6 (2001): 1942-1961. 6. Snir, Sagi, and Raphael Yuster. "A linear time approximation scheme for maximum quartet consistency on sparse sampled inputs." SIAM Journal on Discrete Mathematics 25.4 (2011): 1722-1736. 7. Alon, Noga, Sagi Snir, and Raphael Yuster. "On the compatibility of quartet trees." SIAM Journal on Discrete Mathematics 28.3 (2014): 1493-1507. Including these would better capture the background for introducing your problem. Other Strengths And Weaknesses: Strengths: -This work is the first time that combines Learning-Augmented algorithms with HC, which is creative. The paper is well-written. -The reviewer thinks this is a very natural setting, and the authors do a good job motivating and explaining overall their approach. -The algorithms and proofs are very interesting and constitute solid contributions. -All results are interesting in their own right but also when comparing with some expected hardness of the problems studied and how the authors bypass it. Weakness: -I couldn't find serious weaknesses (other than some minor literature omissions). Overall, this is a nice theoretical contribution that could benefit from a discussion about the lower bounds (samples, runtime etc.) and/or some experimental section, though I don't think this reduces the value of the contribution as is. Other Comments Or Suggestions: -It seems that (1/2 + ε) should be (1/2 - ε) in the context of discussing success probabilities (Appendix I.1, lines 2803-2806). PLease check. -It would be better to include some overview or intuition of the proofs in the main text! Questions For Authors: For constructing weakly consistent trees: if the input has m edges, what would be the runtime of your algo? why is ~O(n^2) is considered near-linear? Do you assume that all edges are present or sth like this? I think you should clarify that. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your careful review and positive evaluation. Our responses are as follows. > Additional References We agree with the reviewer that discussing the related papers and in particular the role of adversarial noise will help demonstrate the challenges with constructing a near-optimal tree. We thank the reviewer for pointing out the additional references; we have added these discussions to the new version. > O(n^2) time vs. ‘near-linear’. Due to the construction of weak partial trees, the run time of the algorithm is $\tilde{O}(n^2)$ even if the input graph has $m=o(n^2)$ edges. However, since the similarity graph on $n$ vertices has $\Omega(n^2)$ edges, our algorithm is still near-linear time. We have clarified this point in the updated version. Furthermore, we believe it is possible to revise our weakly consistent partial tree algorithm to input sparsity runtime, i.e., $o(n^2)$ runtime when the similarity graph is sparse. This could be an interesting direction to pursue as a future step. > Other Comments or Suggestions. We have addressed these points – thanks for the suggestions!
null
null
null
null
null
null
Pfeife: Automatic Pipeline Parallelism for PyTorch
Accept (poster)
Summary: This paper proposes Pfeife, an automated tool that integrates with PyTorch to transparently partition and pipeline large machine learning models across multiple GPUs. It leverages PyTorch's JIT tracing to construct a data-flow graph of the model and then optimizes the pipeline schedule. Experimental evaluations show that Pfeife can outperform existing pipelining tools by up to 22% while handling complex, non-sequential models. Claims And Evidence: - Any specific reasons are provided for TorchTitan's limitation to LLMs only, yet the described techniques could be applied to arbitrary models. Please explicitly state what TorchTitan cannot do that Pfeife can. - Distributed training is not solely due to limited memory capacity but also driven by computation requirements. Techniques like quantization and activation checkpointing reduce memory usage, so using memory constraints as the main motivation for pipeline parallelism is insufficient. - Please specify what kind of data parallelism you are talking about. Traditional data parallelism requires minimal communication (just gradient averaging). When you refer to "it requires full synchronization of the weights across all devices", I suppose you are referring to ZeRO3/FSDP. Please cite them explicitly. - The usage of "model parallelism", "tensor parallelism", and "pipeline parallelism" is inconsistent. For example, "For model and tensor parallelism" implies exclusivity, yet you also say "we focus on pipeline parallelism, which is a particular form of model parallelism." Please ensure consistent usage: when referring to model parallelism, it should include tensor parallelism and pipeline parallelism. - Alpa does not require users to rewrite the model, and it is based on JAX instead of PyTorch. Methods And Evaluation Criteria: Only three models (ViT, LLaMA, and Stable Diffusion) are evaluated. Beyond Stable Diffusion, can Pfeife handle other model types, such as MoE models (e.g., DeepSeek-V3) or state-space models (e.g., Mamba)? Theoretical Claims: - How do you decide which parts of a model can be parallelized? For instance, in a Stable Diffusion pipeline, do users have to manually define which parts go into different pipeline stages, as shown in Listing 1? - Many HuggingFace models cannot be fully traced by TorchDynamo. How do you handle multiple partial graphs? Do you use a custom torch.fx tracer to deal with such models? Experimental Designs Or Analyses: The experimental section is weak, as it includes only a small set of models and configurations. - Pipeline parallelism is typically used in multi-node environments, where tensor parallelism may be used within a node, and pipeline parallelism spans multiple nodes. How is communication modeled in a multi-node environment? Do you have any results on the multi-node setting? - The comparison with automatic parallel frameworks like DeepSpeed and Colossal-AI is insufficient. PiPPy also has an automatic mode for pipeline parallelism (https://pytorch.org/docs/main/distributed.pipelining.html#option-2-splitting-a-model-automatically) —please compare with that as well. Additionally, comparisons to manually optimized systems like TorchTitan and Megatron-LM would show how Pfeife fares against specialized manual implementations. - 3D parallelism is prevalent in nowadays large model training. The experiments only contain DP+PP results. What about including TP? Can your PP method incorporate tensor parallelism? - Only one specific configuration (ViT, 4, 4) shows significant improvements (over 20%). Why is this configuration particularly favorable? Why should Pfeife generalize well to other scenarios? - What is the cost of profiling? For example, if users want to run a job on an 8-node cluster, do they need to firstly launch the profiling job for this 8-node cluster? This would be very time-consuming and costly. - See the below question for C.1, how do you handle faster, newer-generation GPUs? If latency estimations are prone to fail under extreme speed, does that mean Pfeife cannot be generalized to other devices? Supplementary Material: - B.2: Can Pfeife express the DualPipe method proposed by DeepSeek-AI [A] (https://github.com/deepseek-ai/DualPipe)? If not, please clarify any potential limitations. - C.1: The statement "the A100 cluster is too powerful ... we need to limit the performance of the device itself" seems problematic. Does this indicate Pfeife cannot be extended to Hopper or Blackwell GPUs? That would be a significant limitation. - C.2: It appears Pfeife can handle a variety of models, yet results are shown only for ViT and LLaMA in the main text. Why not present other model results? Relation To Broader Scientific Literature: This paper mainly focuses on automating the pipeline parallelism process in distributed training, which can potentially reduce programming efforts for deploying models in a distributed environment. Essential References Not Discussed: Most cited works in the related section are at least two years old. Please incorporate more recent references, such as [B], [C], and [D], and clarify how Pfeife compares or improves upon these newer approaches. They all analyze or consider different kinds of parallelism in a distributed setting. Also, as mentioned in the above question, it would be good to discuss the applicability of Pfeife to DualPipe proposed in [A]. * [A] DeepSeek-AI, "DeepSeek-V3 Technical Report", https://arxiv.org/pdf/2412.19437v1 * [B] Chang Chen, Xiuhong Li, Qianchao Zhu, Jiangfei Duan, Peng Sun, Xingcheng Zhang, Chao Yang, "Centauri: Enabling Efficient Scheduling for Communication-Computation Overlap in Large Model Training via Communication Partitioning", ASPLOS, 2024. * [C] Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang, "Slapo: A Schedule Language for Progressive Optimization of Large Deep Learning Model Training", ASPLOS, 2024. * [D] Ziheng Jiang et al., "MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs", NSDI, 2024. Other Strengths And Weaknesses: See the above sections. Other Comments Or Suggestions: See the above sections. Questions For Authors: See the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would to thank the reviewers' time and feedback. Reviewer aSA1 - Unfortunately, we only have access to the 2 machines we used in the experiments. We don't have budget for more. - Regarding correctness, first we note that we run an order of magnitude more models than most academic papers. It required substantial engineering effort to ensure the results were correct and that we supported most features of PyTorch's torch.compile/TorchFX. These are still under development and thus a moving target. As such, we believe we deserve some credit for being able to run so many models and being open that not all models work (we could have simply left these out). As shown in table 3, only 1 out of 49 models produces incorrect results, which is actually due to a limitation in the model capturing of torch.compile. It's not a bug in our algorithm or implementation. - "Also, the A100 cluster is too powerful to benchmark multiple sets of pipeline parameters (i.e., forward time of each computation node is too short to correctly estimate total latency as we increase loop count or number of devices). Therefore, we need to limit the performance of a device itself." Apologies for the broken English; we'll fix that. What we mean is that some configuration parameters generate forward passes that are too small. At some point, the overhead of spinning a new GPU kernel and the communication starts to dominate, and then the results are not interesting anymore. As we mention in the paper, we prevent the scheduler from generating these configurations since they are unlikely to yield good performance. The slower the device the more interesting configurations we can test. Anyway, this is nothing fundamental, we just wanted to try all configurations we could run. Reviewer wSiC - We run an order of magnitude more models than most academic papers. We believe we deserve some credits for that. We have also run a comprehensive study of the optimal schedules to show that unbalanced schedules are better. We understand we didn't run experiments with a thousands GPUs due to not having them, but our experiments meet or even exceed the bar set by previous pipelining papers. And we do offer new insights into the problem and a new algorithm. - We compared with the systems that are available. GPipe and PipeDream are not. FSDP requires manual work and it overlaps significantly with DeepSpeed. We believe the experiments we did are sound and provide good insights. Reviewer NzCV - Profiler: this is done only once when the model starts and is done per operation. Hence, the overhead is negligible for long-running training sessions. The time it takes is about the same as running the model sequentially. Profiling of inter-device communication is done only once on installation. - Limitations: Pfeife's current implementation only supports natively looped schedules. The algorithm itself is obliviously of the schedule, but we use a template to produce looped schedules. We would like to have a language to specify templates for schedules. That would be a great extension to this work. - Using Pfeife from other frameworks: Pfeife has a frontend part that reads TorchFX graphs. That part is specific to PyTorch, but it's thin; we only need to create a graph with nodes. Then, we have the execution engine that has some bits tied to PyTorch since we implement a backend for torch.compile. But it can as well be replaced. The core, the scheduling algorithm, etc, they are framework independent since they operate over model graphs annotated with costs. Reviewer 8hKh - We don't have results for multi-server deployments. We only have 2 servers that are shared. - PiPPy is a work in progress. It is not ready for comparisons (we tried). - We could compare with TP, but this requires more manual work than we could afford. - Pfeife's scheduling algorithm can handle DualPipe. Our current implementation is hardcoded with a scheduling template that spawns looped schedules only. But that's not a fundamental limitation, we just didn't implement other templates. The scheduling algorithm itself is oblivious since it operates over graphs. Implementing other templates can be done with a dozen lines of code. - Pfeife can work with powerful GPUs. We are sorry for the broken English. What we meant is that we should not slice graphs too thinly for faster devices since then the overhead of launching GPU kernels and communication starts to dominate. - We don't include results for other models mostly because we didn't have enough compute credits. Running a couple of times to test correctness is one thing, but taking stable performance results for multiple configurations and attempting to compare against other frameworks requires a lot more compute. Another reason is that the other frameworks require manual work for each model, and our team is small. We cannot spend months trying to run a dozen models in multiple frameworks. Plus most tools are fragile and crash often.
Summary: The paper introduces Pfeife, a new tool that integrates with PyTorch to provide automatic pipelining of machine learning models. Pfeife aims to address the memory limitations of GPUs when training large models by parallelizing the execution of these models across multiple devices. It leverages PyTorch's tracing JIT compiler (TorchDynamo) to capture a static data-flow graph of a PyTorch model and then automatically generates a pipeline schedule to distribute the operations across devices. The authors claim that Pfeife can run large models that would otherwise not fit on a single device and that it outperforms state-of-the-art pipelining tools. Claims And Evidence: Claim 1. Pfeife can run large models that would otherwise not run due to not fitting in a single device. The paper provides empirical results demonstrating Pfeife's ability to run large models. This advantage is shared by all parallelism techniques, which is not an exclusive benefit of Pfeife. Claim 2. Pfeife outperforms state-of-the-art tools by up to 22% in terms of training throughput.  The paper presents comparative evaluations against other pipeline tools. This number is from Table 1. Claim 3. Pfeife can pipeline non-sequential models such as Stable Diffusion that are not supported by previous pipeline parallelism tools. The paper mentions this capability, and the evaluation includes Stable Diffusion. The authors should provide a more detailed explanation of why previous tools cannot support such models and how Pfeife overcomes these limitations. Methods And Evaluation Criteria: Pfeife leverages PyTorch 2's TorchDynamo to capture the data-flow graph of ML models. It employs a pipeline scheduling algorithm that considers both graph slicing and scheduling parameters to optimize performance. The authors use a cost model based on critical path analysis to estimate running time and memory usage. The evaluation includes experiments with large models, and the primary performance metric appears to be running time or throughput. It would be beneficial to include memory usage as a key evaluation criterion, given that the motivation is to address memory limitations. The choice of benchmark models seems appropriate for demonstrating the capabilities of the system. Theoretical Claims: The paper describes synchronous and asynchronous pipeline parallelism and an algorithm to find pipeline schedules. The authors also investigate optimal workload distribution conditions. The paper uses a cost model based on critical path analysis. This cost model and the scheduling algorithm are central to the paper's contributions. The correctness of the algorithm and the accuracy of the cost model in predicting performance should be rigorously established, possibly with proofs or theoretical analysis in the supplementary material. I suggest to avoid using the term "optimal" unless there are solid proof for that claim. Experimental Designs Or Analyses: The experimental design involves evaluating Pfeife on large ML models and comparing its performance with existing pipelining tools.   The paper mentions using a profiler to measure computation time and memory consumption. More details on the profiling methodology, including the granularity of the profiling (e.g., per-operation), the profiling overhead, and the accuracy of the profiler, would strengthen the analysis.   The authors acknowledge that linearizing the computation graph is an approximation. A more detailed analysis of the impact of this approximation on the generated schedules would be valuable. Supplementary Material: I quickly looked through the supplementary materials A-C. Relation To Broader Scientific Literature: The paper effectively situates Pfeife within the context of existing parallelization techniques for machine learning models, including data parallelism, weight sharding, model parallelism, and tensor parallelism. It provides a good overview of pipeline parallelism and discusses relevant prior work such as PipeDream, GPipe, DAPPLE, AutoPipe, TeraPipe, and others. The authors clearly articulate how Pfeife builds upon and differs from previous approaches, such as by using a more fine-grained partitioning (per operation) and co-optimizing slicing and scheduling. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: * The paper addresses an important problem in deep learning: memory limitations in training large models.   * Pfeife offers an automated and transparent way to pipeline models, which can improve usability.   * The approach of co-optimizing slicing and scheduling is novel and promising.   Weaknesses: * The theoretical claims and the cost model would benefit from more rigorous analysis.   * The limitations of the approach, such as those arising from the linearization of the computation graph, should be discussed in more detail. Other Comments Or Suggestions: n/a Questions For Authors: 1. What are the limitations of Pfeife, and how do you plan to address them? 2. How difficult can we apply Pfeife to other machine learning frameworks and compilers, such as JAX/XLA? Code Of Conduct: Affirmed. Overall Recommendation: 2
Summary: The paper introduces Pfeife, a system integrating with PyTorch's `torch.compile` to automate pipeline parallelism without user intervention. Pfeife partitions models at an operation-level granularity across multiple GPUs, employing a cost model combined with beam search to optimize pipeline scheduling. Key claimed contributions include a novel algorithmic framework allowing simultaneous optimization of pipeline slicing and scheduling, demonstrating that uneven workload distribution with prefetching can improve performance significantly. Evaluations demonstrate Pfeife achieving throughput improvements of up to 22% over DeepSpeed and Colossal-AI across various large models, including ViT-g/14, Llama2-7B, and StableDiffusion-XL. Claims And Evidence: Claims about performance improvement and automation are generally supported through rigorous experiments comparing Pfeife to DeepSpeed and Colossal-AI. Methods And Evaluation Criteria: The evaluation methods are well designed and carried out for demonstrating Pfeife’s practicality and efficiency within PyTorch. However, the limited scope in evaluation criteria—lacking comprehensive comparisons with established frameworks like GPipe, PyTorch FSDP and PipeDream—undercuts the robustness of its methodological validation. Moreover, Pfeife’s reliance on PyTorch's compiler limits the scope of its application, especially considering the reported failure rates on several TorchBench models. Theoretical Claims: The manuscript does not contain substantial new theoretical results but correctly refers to established concepts such as cost models of pipeline parallelism. No formal proofs requiring validation are presented; theoretical claims referenced from literature are appropriate and correctly applied. Experimental Designs Or Analyses: Experimental setups using ViT-g/14, Llama2-7B, and StableDiffusion-XL are adequate for evaluating memory management and throughput in large-scale models: they cover a wide range of recent architectures and are of adequate size. Supplementary Material: I could not find any supplementary material related to the manuscript. Relation To Broader Scientific Literature: Pfeife situates itself within the context of automatic pipeline parallelism, effectively highlighting distinctions from manual annotation-required systems like PiPPy and Varuna. However, the literature review fails to deeply engage with critical predecessors such as GPipe, PipeDream, GraphPipe, and ZeRO-3. Particularly, the omission of GraphPipe (which introduced DAG-based scheduling with significant performance gains) severely undermines the perceived novelty of Pfeife's algorithmic contributions. Essential References Not Discussed: Not necessarily essential, but GraphPipe is of interest to this work: Jeon, Byungsoo, et al. "Graphpipe: Improving performance and scalability of dnn training with graph pipeline parallelism." arXiv preprint arXiv:2406.17145 (2024). Other Strengths And Weaknesses: Strengths: Pfeife's integration with PyTorch’s TorchDynamo presents practical usability. The idea of automatic processing in lieu of manual annotation is fresh and interesting. Weaknesses: Pfeife lacks fundamental novelty - there is a plethora of existing pipeline parallel systems, and in the absence of strong, comprehensive benchmarking results backed by available artifacts, it is not a strong submission to this venue. Other Comments Or Suggestions: The author may consider making the figures more b/w friendly. Questions For Authors: I do wonder if the author has a reason for omitting other well-known pipeline parallel systems like GPipe and PipeDream in the work. Since the manuscript is built on PyTorch, a comparison with FSDP may also be warranted. There is also more recent works on more sophisticated flavors of pipeline parallelism, such as GraphPipe. Code Of Conduct: Affirmed. Overall Recommendation: 2
Summary: This paper presents Pfeife, a tool that automatically performs pipeline parallelization of PyTorch models. Compared to prior methods, the main innovation is that the pipelining is performed in a manner that is completely transparent to the developer, requiring no manual annotations. Specifically, Pfeife is implemented as a backend to $torch.compile$. Starting from a computation graph traced by the PyTorch compiler, Pfeife uses an estimated cost model tuned to the specific hardware, and performs a co-optimizing search that alternates between fusion ("slicing") and pipeline scheduling, aiming to minimize the critical path. Experiments with an 8xA100 (40GB) cluster shows that Pfeife exhibit improvements over DeepSpeed and ColossalAI tools on Llama2-7b and is comparable on SDXL, despite requiring no manual effort. Claims And Evidence: This paper makes two main claims 1) Pfeife performs pipelining with no manual effort. This is generally true: the schedules are generated without any manual efforts, and the schedules are generally (though not always) correct from the coverage tests. 2) Pfeife discovers pipeline schedules that are as good as semi-automated pipeline tools. This is also supported by the experimental results, albeit for relatively "small" models (Llama2-7b and SDXL) Methods And Evaluation Criteria: The proposed methods and evaluation both make sense. It would have been nice to evaluate performance on even larger models / diverse hardware topologies (where pipelining is even more necessary and tricky to get right), but I recognize that this is out of the reach for the majority of researchers. Theoretical Claims: N/A Experimental Designs Or Analyses: The way the experiments are properly designed (both the correctness and performance) and the analysis all make sense. However, I would have liked to see at least one hand-tuned pipeline as a baseline for comparison. Supplementary Material: I have skimmed most of the supplementary material. I focused on Appendix C, Evaluation Details. Relation To Broader Scientific Literature: This paper focuses specifically on pipeline parallelism for multi-GPU workloads. Compared to existing pipeline tools, the main novelty is that the pipelining is done in a completely automatic fashion, without any developer input. The cost model is inspired by DAPPLE (Fan et al., 2021). The search is unique in that, for every pipeline schedule in the scheduling search space, a beam search is conducted for an optimal graph slice; the search space subsumes many (but not all) prior pipeline schedules (1F1B). Essential References Not Discussed: N/A Other Strengths And Weaknesses: I think the contribution of completely automatic pipelining is strong. I can see the ideas here being incorporated into future works. It's also interesting that this work allows SDXL to be pipelined, though I'm unclear about the practical implications The main weaknesses are 1) the co-optimization algorithm is quite complicated and the design decisions receive no justification, either theoretical or practical 2) the experiments are conducted in a relatively limited setting (8x A100s, llama 7b) 3) the correctness is somewhat low. Even being generous in what is counted as a bona fide failure, approximately 10% of the evaluation benchmark set (which consists of relatively small models) yields failures Other Comments Or Suggestions: For Table 4 in Appendix C.3, it would be good to have additional columns that display the errors. Questions For Authors: "Also, the A100 cluster is too powerful to benchmark multiple sets of pipeline parameters (i.e., forward time of each computation node is too short to correctly estimate total latency as we increase loop count or number of devices). Therefore, we need to limit the performance of a device itself." Can you clarify this statement? Code Of Conduct: Affirmed. Overall Recommendation: 3
null
null
null
null
null
null
CombiMOTS: Combinatorial Multi-Objective Tree Search for Dual-Target Molecule Generation
Accept (poster)
Summary: The paper introduces CombiMOTS - a Pareto Monte Carlo Tree Search based approach that is designed to efficiently handle complex multi-objective optimization problems. In particular, the authors tackle the dual-targeting molecule generation problem and evaluate the framework using three protein pairs. For this, they narrow down a large molecule fragment space into a synthesizable one and generate molecules with optimization of docking scores against two proteins. Claims And Evidence: 1. The paper provides strong evidence that CombiMOTS generates molecules with better docking scores, diversity, and novelty. However, synthetic accessibility is not discussed in detail. The authors report SA scores, which are calculated based on the molecular structures, but do not include a retrosynthesis validation study to check if the proposed molecules can truly be synthesized. 2. The authors demonstrate that CombiMOTS consistently outperforms existing methods, however it seems that several important baselines are missing. Additionally, it is not quite clear how the proposed approach compares to baselines in terms of efficiency. Methods And Evaluation Criteria: The proposed CombiMOTS method, real-world cases and evaluation criteria (originality and quality) do make sense for the problem of dual-targeting molecule generation. However, it is not quite evident how the proposed methodology compares to other existing approaches. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: 1. In Table 1 error rates are missing. It would be interesting to see how consistent CombiMOTS is and how it compares to other models in terms of stability of metrics. 2. The authors evaluate their approach in terms of toxicity of generated molecules. It would be more informative to evaluate the impact of toxicity optimization in an ablation study. 3. Although synthetic accessibility score is a common metric to estimate synthetic complexity of a compound, there are more advanced metrics capturing not only structural features of a molecule, but also possible retrosynthetic pathways (e.g., BR-SAscore and RAscore). Authors should consider calculating these metrics if they claim that synthetic accessibility of generated molecules increases. Supplementary Material: The supplementary material seems to be well-structured and useful for understanding the details of the work. Relation To Broader Scientific Literature: The issue of multi-objective fragment-based molecular optimization has been investigated in various studies. The use of Pareto MCTS for molecular generation was studied in [1]. The novelty of the proposed approach lies in the application of Pareto MCTS for dual-targeting molecule generation. [1] Yang, Yaodong, et al. "Enabling target-aware molecule generation to follow multi objectives with Pareto MCTS." Communications Biology 7.1 (2024): 1074. Essential References Not Discussed: FREED++ framework [1] was not cited, although it performs fragment-based molecule generation using RL and was reported to outperform REINVENT in certain scenarios. There are several models for multi-objective drug optimization that use MCTS [2-5] which were not cited. The authors should also consider adding these models as baselines. There is also one recent paper that introduces Pareto optimization for drug design [6]. [1] Telepov, Alexander, et al. "FREED++: Improving RL Agents for Fragment-Based Molecule Generation by Thorough Reproduction." arXiv preprint arXiv:2401.09840 (2024). [2] Suzuki, Takamasa, et al. "Mothra: Multiobjective de novo molecular generation using monte carlo tree search." Journal of Chemical Information and Modeling 64.19 (2024): 7291-7302. [3] Roucairol, Milo, et al. "DrugSynthMC: An Atom-Based Generation of Drug-like Molecules with Monte Carlo Search." Journal of Chemical Information and Modeling 64.18 (2024): 7097-7107. [4] Sun, Mengying, et al. "Molsearch: search-based multi-objective molecular generation and property optimization." Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining. 2022. [5] Qian, Hao, et al. "AlphaDrug: protein target specific de novo molecular generation." PNAS nexus 1.4 (2022): pgac227. [6] Yang, Yaodong, et al. "Enabling target-aware molecule generation to follow multi objectives with Pareto MCTS." Communications Biology 7.1 (2024): 1074. Other Strengths And Weaknesses: Strengths: 1. The research in generation of dual-targeting molecules is topical and important for the broad scientific community. 2. The paper is easy to follow. The plots and figures are visually attractive. Weaknesses: 1. It seems that the idea of using Pareto MCTS for molecule generation is not novel. Although generating dual-targeting molecules is a more complex task, it is very similar to the conventional multi-objective molecule optimization. 2. Some aspects of experimental design and analyses need improvements, e.g., inclusion of more baselines and experimental studies on robustness and efficiency of the presented approach. 3. Not all the relevant papers on this topic were cited suggesting that the authors might not be aware of many significant developments in the field. 4. Some claims are not supported with clear evidence or just vague (e.g., about the proposed method generating interpretable compounds or the better synthetic accessibility of the generated molecules). Other Comments Or Suggestions: 1. The experiment reported in section C.1.1 of the Appendix demonstrates that Pareto optimization leads to better-balanced trade-offs than a naive scalarized reward function. It would be also interesting to investigate how adding Pareto optimization impacts the computational costs required to generate molecules. 2. Comparison of CombiMOTS to baselines in terms of time was not reported, although given the complexity of multi-objective drug optimization, it would be interesting to see how much time is required for the proposed approach to achieve a certain quality of molecules. 3. Exploring the aspect of selective molecule generation could be beneficial, as technically the idea remains the same, while expanding the scope of the study. Additional case studies to this point would strengthen the work by better demonstrating its practical utility. Questions For Authors: 1. Why were only the three baselines considered? Why not consider other SOTA models that use MCTS, Pareto optimization and/or diffusion? 2. The reported 11s/rollout time means that full runs take days. How does this compare to the other methods in terms of efficiency? 3. What does “interpretable” compounds in line 96 mean? What are the statistically solid results supporting that claim? 4. Is it possible to provide a retrosynthetic evaluation of the generated molecules? 5. Is it possible to calculate standard deviations for metrics in Table 1 (e.g., after several runs with different random seeds)? 6. Is it possible to compare CombiMOTS with and without toxicity optimization to see the differences more clearly? 7. Is it possible to compare CombiMOTS with baselines in terms of time? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer **KAF5**, thank you for your constructive suggestions. We address your concerns below and through https://anonymous.4open.science/r/CombiMOTS-0FEB. Concerns - ### 4. Synthesizability Metrics and Retrosynthetic Evaluation We assess CombiMOTS's synthesizability against baselines to justify using Enamine rather than post-hoc evaluation. Synthesizability metrics can be: - **Rule-based** (e.g., SAscore): Efficient but fail to capture real-world constraints. - **Retrosynthetic-based**: Recursively breaking down a target into industry-available precursors. However, it is expensive and limited to reference stocks (e.g.,Aizynthfinder [1] on the ZINC database). We use the opposite workflow (from precursors, find optimal solutions within bounded iterations - see reply `H` to **pmFQ**) - **Learning-based** (e.g., RAscore, BR-SAscore): Context-adaptable but data-dependent. > Note: Enamine’s 80% synthesis success rate and SyntheMol’s experimental validations [2] motivate practical synthesis over mere metric evaluation. #### Comparative Analysis - **Density Distributions:** Density distributions (in repo) include: - Generated molecules (all models) - Enamine building blocks (expects high synthesizability) - COMPAS-3 dataset [3] (contains hard-to-synthesize polycyclic compounds) - **Observations:** - COMPAS-3 is deemed synthesizable by SA & RAscore but not by BR-SAscore. - Enamine excels on RAscore but performs similarly or worse on other metrics. - MARS unrealistically outperforms Enamine on SA/BR-SAscore. - CombiMOTS outperforms baselines on RAscore but aligns with/worsens on others. #### Aizynthfinder [1] Retrosynthetic Study Among 1544 CombiMOTS molecules, 162 (10.42%) are not solved by Aizynthfinder—due to its ZINC dependency (example in repo). These inconsistencies among metrics underscore why leveraging available precursor data is relevant to our motives. >B) Selective molecule generation We explore a workflow for CDK7 inhibitors as a case study. Off-targets (CDK1, 2, 9, 12, 13) are reported in [4,5]. CombiMOTS could be adapted by: - Searching compounds maximizing CDK7 activity and minimizing off-target activity. - Narrow the search space on known active scaffolds/warheads. - Post-hoc filtering (medicinal chemistry, in-silico validation). While preliminary and beyond the scope of our current work, repository statistics summarize this workflow. Other Questions - >1. Additional Baselines We thank the reviewer for suggesting references to integrate in future versions. We note distinctions with CombiMOTS: - DrugSynthMC [6] generates drug-like molecules using n-grams but lacks support for protein target setting. - ParetoDrug [7] extends PMCTS to protein-specific generation via atom-based, Lmser transformer models. Though cited, this method remains single-target (no multi-target code). - Mothra [8] is PMCTS-based but limited to single-target RNN decoding. - MolSearch [9] tackles on dual-target optimization rather than de novo generation by combining PMCTS with fragment-based "design moves". We adapt Mothra and MolSearch to the GSK3B-JNK3 task. Repository metrics/plots show CombiMOTS's superiority across all metrics, likely due to: - Mothra relying on single token additions. - MolSearch's dependence on learned augmentation rules - Both require extensive sampling for Pareto convergence (see reply `H` to **pmFQ**). - In contrast, CombiMOTS's node structure leads to products in a single reaction, leading to faster convergence. >2. Efficiency & 7. Runtime One CombiMOTS tree traversal runs in $O(1)$ time, with hash table lookups (precomputed scores) being the main overhead when creating child nodes. However, oracles are called whenever a product is found: the computational overhead is bounded by the slower operation between $O(n_{children})$ and $\text{Oracles(product)}$. Runtimes of all evaluated models are in the repository. Notably, competing PMCTS methods are significantly slower. >3. Interpretable compounds. In Fragment-Based Drug Design, it means explaining molecular property through the fragments it is built upon. Here, FGIB helps find more dual-actives (see reply `4` to **6Myy**) >5. Error rates of Table 1. We update Table 1 with error rates, p-values, and separate evaluation of baselines for dual-active generation. CombiMOTS, as a search algorithm, returns all found products, making comparisons with generative models irrelevant. >6. Toxicity Optimization We add plots in repository for the corresponding ablation study to conclude on CombiMOTS's ability to optimize non-toxicity. Reference - [1] Genheden et al., *J. Cheminform.*, 2020 [2] Swanson et al., *Nat. Mach. Intell.*, 2024 [3] Wahab and Gershoni-Porane, *PCCP*, 2024 [4] Constantin et al., *Br. J. Cancer*, 2023 [5] Olson et al., *Cell Chem. Biol.*, 2019 [6] Roucairol et al., *J. Cheminform.*, 2024 [7] Yang et al., *Commun. Biol.*, 2024 [8] Suzuki et al., *JCIM*, 2024 [9] Sun et al., *KDD'22*, 2022 --- Rebuttal Comment 1.1: Comment: I appreciate the authors making every attempt to address all the questions and concerns I had expressed. Thanks for the additional experimental results you included in the repository. While I find it difficult to critically assess all the new evidence given the time constraints, I'm inclined to raise my score. I do not have any further questions. --- Reply to Comment 1.1.1: Comment: Dear reviewer **KAF5**, we would like to express once again our gratitude towards your great advice and observations during the entire review process ! Including answers to your concerns in future versions will surely improve our work's quality and standpoint on the topic. We greatly appreciate how you helped us underline multiple aspects of our method such as its relevance on synthesizability, how it compares against PMCTS baselines, its runtime or how it could be applied to other tasks. We also thank you for increasing your score!
Summary: The paper proposes CombiMOTS, a combinatorial multi-objective tree search framework for dual-target molecule generation. It integrates Pareto Monte Carlo Tree Search (PMCTS) with fragment-based synthesis-aware generation. Key contributions include: (1) A reduced synthesizable fragment space via target-aware building blocks; (2) A Pareto-optimized MCTS algorithm for balancing conflicting objectives (target affinity, QED, SA); (3) Empirical validation showing superior performance in generating diverse, synthesizable molecules with balanced properties compared to baselines. Claims And Evidence: Claims about "superior diversity" (Table 1) lack statistical significance tests (e.g., p-values). Toxicity prediction (Section 6.2) uses an ensemble of Chemprop models trained on a small dataset (1,478 molecules), risking overfitting. Methods And Evaluation Criteria: Use of industry-ready Enamine REAL Space ensures practical synthesizability. Pareto optimization aligns with multi-objective drug design. However, docking scores are prioritized over QED/SA without justification. This may bias results toward unrealistic molecules. Theoretical Claims: The Pareto UCB formula (Equation 8) combines scalar exploration terms with vectorized rewards. This lacks theoretical grounding—how does it guarantee Pareto optimality? Experimental Designs Or Analyses: Concern: Baseline implementations (e.g., REINVENT) use different objectives (QED-only vs. docking scores), making comparisons unfair. Issue: Computational cost of CombiMOTS (25s/rollout for 6 objectives) is not compared to baselines. Supplementary Material: Supplementary material is not included. Relation To Broader Scientific Literature: The work builds on Swanson et al. (2024) but extends MCTS to multi-objective settings. However, recent Pareto MCTS variants (e.g., Yang et al., 2024a) are not discussed. Essential References Not Discussed: Pareto optimization in drug design: Medina-Franco et al. (2013) proposed multi-target optimization frameworks. Multi-objective RL: Yang et al. (2024a) applied Pareto MCTS to molecule generation but is uncited. Other Strengths And Weaknesses: Weaknesses: Methods are well-described, but theoretical justification for Pareto UCB is unclear. Other Comments Or Suggestions: Page 2: "Fromer & Coley, 2023; Luukkonen & Maagdenberg, 2023" lack full references. Algorithm 1: Line 16 uses "max(v, selected.P)"—unclear if "max" applies to vectors. Questions For Authors: How does the Pareto UCB formula ensure convergence to Pareto-optimal solutions? A theoretical analysis is needed. Why prioritize docking scores over QED/SA in objectives? Would including all metrics (as in Appendix C.1.2) improve practicality? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer **pmFQ**, thanks for providing valuable feedback to our work! The requested theoretical analysis on PUCB significantly clarifies convergence speed and solution optimality. We address your concerns and questions below and through https://anonymous.4open.science/r/CombiMOTS-0FEB. --- Concerns - >A) "Table 1 lacks statistical significance." Following you and **KAF5**, we update Table 1 to improve robustness and relevance. See reply `5` to **KAF5**. >B) Overfitting concerns - Toxicity prediction We thank the reviewer for highlighting the small size of ClinTox; we agree models trained on limited data lack real-world accuracy. Our use of ClinTox is intended primarily as a case study due to its clinical relevance (toxicity assays on FDA-approved drugs), despite toxicity data being generally scarce. Nevertheless, ClinTox remains a critical resource for evaluating machine learning models in drug toxicity prediction [1]. We will explicitly underline this limitation in future revisions. >C) Unfair objective settings across models. `Appendix C.2` reimplements REINVENT and MARS with our settings (docking score and bioactivity prediction) to investigate alignment of objectives. If needed, we will release our implementations of adapted baselines. >D) Compared computational cost to baselines. See reply `2` to **KAF5**. >E) Broader literature: "Recent Pareto MCTS variants (e.g., Yang et al., 2024a) are not discussed." Thank you for highlighting this. Our manuscript briefly cites Yang et al. (2024a) (lines 81-82); we further discuss its broader relevance (see reply `1` to **KAF5**). >F) Page 2: "Fromer & Coley, 2023; Luukkonen & Maagdenberg, 2023" lack full references. Thanks! We will complete references. >G) Algorithm 1: Line 16 uses "max(v, selected.P)"—unclear if "max" applies to vectors. Algorithm 1 omits details for readability. Algorithm 2 (line 17) specifies "max" as element-wise, common in vectorized Pareto MCTS [2]. --- Asked Questions - >H) **Theoretical Analysis**: Computational Cost & Pareto-optimal Convergence. Our manuscript had typos in `Equation 8` and `Algorithm 2` (l.36) on the PUCB formula, omitting a constant of 4. We correct it to: $\vec{PUCB(n)} \gets \frac{\vec{R(n)}}{N(n)}+C*\vec{Oracles}\sqrt{\frac{ln(D)+\textbf{4}*ln(1+N(p))}{1+N(n)}}$. We provide theoretical grounding by adapting theorems from [ParetoMCTS]. Due to space limits, we summarize key statements below: Under assumptions satisfied by our D-dimensional problem setting, - **Theorem 1**: Consider a node with a child node $v_k$ visited $T_k(n)$ times in $n$ steps. If $v_k$ is a sub-optimal node, $\mathbb{E}(T_k(n))$ is logarithmically bounded. $\forall \xi>0, \exists N_0(\xi) \in \mathbb{N^*}, \text{such that} \ (n \geq N_0(\xi)) \implies \mathbb{E}[T_k(n)] \leq \frac{8ln(n) + 4ln(D)}{(1-\xi)^2 * (\underset{k,d}{min}\Delta_{k,d})^2} + N_0(\xi) + O(1),$ where $(\underset{k,d}{min}\Delta_{k,d})$ is the minimum reward gap (across D dimensions) between $v_k$ and its most dominant node in the Pareto front (i.e. the "farthest away"). This bound implies: (i) the logarithmic regret $\alpha ln(n)$ implies that sub-optimal node selections decrease over time. (ii) The denominator depends on the minimum reward gap, implying lower regret when sub-optimal nodes are more distinguishable from the Pareto front. (iii) Increasing the number of objectives D also increases regret by a logarithmic factor $\beta ln(D)$, but makes it harder to distinguish sub-optimal and Pareto nodes, directly impacting (ii). Thus, (ii) and (iii) motivate us to wisely select our objectives. - **Theorem 2**: With $I_t$ the selected child node at step $t$ and $(C,\rho$) constants, $\mathbb{P}(I_t$ is sub-optimal$)\leq Ct^{-\frac{\rho}{2}\left(\frac{ \min_{k,d}\Delta_{k,d} }{36}\right)^2} \xrightarrow{t\infty}0.$ This implies the guaranteed convergence towards optimal nodes at a polynomial rate. In our work, we aim to approach optimal solutions in a reasonable time, thus the need to properly select objectives, efficient oracles and search space size (ablations asked by **6Myy**). >I) Why prioritize Docking Scores over QED/SA? We list our motives below: - Fingerprint-based predictions lack structural interpretability (e.g. RationaleRL, MARS, [MolSearch suggested by **KAF5**]) compared to *in-silico* docking simulation. - Experiment `C.1.2` suggests that using all objectives leads to worse tradeoffs due to the larger Pareto fronts (now supported by question `H`). An additional experiment (50k rollouts optimizing QED/SA over docking scores) is reported in the repository. We observe that QED slightly increased at the cost of much worse docking scores but the inverse is not true: not considering QED/SA during search still converges towards good scores. - As we use Enamine REAL Space, we discuss the non-necessity of SA score (see reply `4` to **KAF5**). --- Reference - [1] Tran et al., *JCIM*, 2023 [2] Yang et al., *Commun. Biol.*, 2024
Summary: The paper introduces CombiMOTS, a Pareto Monte Carlo Tree Search (PMCTS) framework for generating dual-target molecules, which are molecules that can interact with two target proteins simultaneously. The authors argue that existing methods often simplify the dual-target optimization problem by linearly combining objectives and neglecting synthetic planning. CombiMOTS addresses these challenges by exploring a synthesizable fragment space and using vectorized optimization constraints to encapsulate target affinity and physicochemical properties. The authors claim that experiments on real-world datasets demonstrate that CombiMOTS generates novel dual-target molecules with high docking scores, enhanced diversity, and balanced pharmacological characteristics. The core idea combines PMCTS with a fragment-based approach and a focus on synthesizability using the Enamine REAL Space. Claims And Evidence: CombiMOTS is able to perform dual-target optimization. The authors claim that there are intricacies in dual-target optimization that cannot be captured due to simplification by "linear combination of individual objectives", but there are other scalarizing methods (by hypervolume, or distance to utopia point). The results in Table 1 seem to indicate the metrics used there are not particularly useful, with almost all models achieving scores of >99%. Evaluating generative model quality by validity, uniqueness, novelty, and diversity have been noted as problematic (https://www.sciencedirect.com/science/article/pii/S1740674920300159). Methods And Evaluation Criteria: It would strengthen the claims of the pareto MCTS being superior if the optimal tradeoffs were quantified, rather than in distribution plots of Figure 3 and 6. A score that quantifies the optimality of the pareto-optimal point? From the plots, it looks like RationaleRL is quite successful as well in the multi-objective task. Theoretical Claims: There are no theoretical claims made. Experimental Designs Or Analyses: The design is logical and sound, however, may be limited in novelty. The authors used a fragment-based tree search to generate synthesizable molecules, and added multi-objective Pareto optimal score, with no significant advancements in either methods. However, I appreciate the non-triviality of implementing this workflow. Additionally, tt is also not clear that CombiMOTs are better than the models presented. Supplementary Material: I reviewed the additional experiments section. Relation To Broader Scientific Literature: The work builds upon tree search generation methods for objective oriented molecular generation. Essential References Not Discussed: - Multiobjective benchmarking for generative models - PMO (https://arxiv.org/abs/2206.12411) - Pareto-optimal generation (https://openreview.net/pdf?id=sWRZxIcR8qK) Tree-search methods based on fragments and rules based generation have been explored before. - https://arxiv.org/pdf/2110.06389 - https://proceedings.neurips.cc/paper_files/paper/2020/file/4cc05b35c2f937c5bd9e7d41d3686fff-Paper.pdf - https://www.nature.com/articles/s42004-024-01133-2 Other Strengths And Weaknesses: Strengths: - Clear problem definition and motivation. - Focus on synthesizability is key for molecular generation - Easy to read and follow Weakness: - It is not clear that the claims of CombiMOTs producing dual-target molecules is better than other methods. For example, is it possible to perform a similar case study with a candidate generated by RationaleRL? A metric to distinguish the pareto-optimal solution being better from CombiMOTS would be beneficial. - The idea of using synthesizability routes and fragments to give realistic molecules is not new, particular the combination of tree-search methods with retrosynthetic routes (see early response). - Limited novelty - Other scalarizing methods that consider pareto-optimization are not considered. Other Comments Or Suggestions: The discussion on pareto optimization can be shortened. The comparison to scalarized results CombiMCTS are important to demonstrate the effect of PUCB optimization, and should be in the main text. Questions For Authors: Please see comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer **h6f3**, thank you for your constructive review which improved our perspective! Particularly, your comments provided more clarity regarding the Pareto superiority against baselines. We address your concerns below and through https://anonymous.4open.science/r/CombiMOTS-0FEB. --- Concerns & Replies - >A) On "linear combination of individual objectives" and other scalarizing methods. We acknowledge the misuse of “linear combination” as the sole example of scalarization method. We revise our statement by generalizing the advantage of vectorized pareto optimization over scalarized methods – including those mentioned by the reviewer - for the reasons below: - Hypervolume assesses the quality of a solution set relative to a reference set: in the dual-target paradigm, baseline sets are either scarce or non-existing, making it hard to use in our task. `Appendix C.3` hints at the capability of CombiMOTS to still effectively find solutions in case of data scarcity. - Following your suggestion, we assess each model's ability to discover Pareto optimal molecules by computing their respective first Pareto front and reporting their average R2-distance to utopian point (normalized in range [0 to 2] - lower is better), as well as the size of the Pareto optimal set in parenthesis. |Model/Task|GSK3B-JNK3|EGFR-MET|PIK3CA-mTOR| |-|-|-|-| |**CombiMOTS**|0.8075 **(174)**|**0.8419 (268)**|0.8143 **(210)**| |**MARS**|**0.7830** (102)|0.8845 (142)|0.8144 (89)| |**RationaleRL**|0.8307 (137)|0.9388 (148)|0.8852 (113)| |**REINVENT**|0.8223 (93)|0.8518 (177)|**0.8035** (174)| The main takeaway is that even though all methods present a good overall score, CombiMOTS consistently finds **more Pareto optimal solutions** than others. However, this scalarized score doesn't allow identifying inconsistencies among baselines. To clarify this point, we plot radar charts across all metrics used in the manuscript and publish them in the `rebuttal` folder of the anonymous github. >B) "Selected metrics don't seem useful in Table 1." Like other visualizations, the role of Table 1 is to highlight inconsistencies of the baselines across metrics (including distribution plots): in all tasks, all baselines perform poorly in at least one metric. Only CombiMOTS performs equally or better than baselines regardless of the task/metric. About validity: even if only few generations are invalid, it demonstrates a critical flaw in the model not respecting fundamental chemical rules. CombiMOTS use SMARTS-based chemical templates, conformed to real reactions. As requested by reviewers **pmFQ/KAF5**, we revise Table 1 to be statistically more robust and meaningful. See reply `5` to **KAF5** for details. >C) "Selected metrics may be problematic[...]" We sincerely thank the reviewer for raising the concern about the limitations of using such metrics. In the mentioned paper, we acknowledge that "even the best in silico scores remain fruitless if the suggested compounds cannot be synthesized." Ideally, synthesizing molecules and performing assay experiments would provide the most reliable validation; however, we also agree with reviewer **6Myy** as this typically exceeds the scope of machine learning papers. Thus, like many existing works in this field, we rely on widely-used computational metrics primarily to fairly compare methodologies. >D) "It looks like RationaleRL is quite successful as well." We list the weaknesses of RationaleRL in our experiments: - GSK3B-JNK3: Low uniqueness and lowest diversity; - EGFR-MET: Low diversity, weak binding affinity with MET (Fig.6A) and high SA score; - PIK3CA-mTOR: Low uniqueness, novelty, diversity and fairly low QED score (Fig.6B). We discuss the low diversity of RationaleRL. Its workflow revolves around (i) extracting and merging "rationales" from high-property compounds, (ii) training & finetuning a graph completion module to obtain novel molecules from the merged rationales. Generation occurs by autoregressively complete the same structural cores, thus leading to poor diversity. We plot and add (to the repository) t-SNE figures of generated compounds from all models: RationaleRL generations are clustered. In drug design, RationaleRL is better suited for Lead Optimization rather than Hit Discovery. >E) Suggested References We thank the reviewer for suggesting related material. We will integrate similar topics to our related work. In the meanwhile, we comment on how our work differs: Multiobjective JANUS focuses on molecular optimization, not de novo design. [1] SynNet [2] and MEEA* [3] tackle synthesis planning, which answers: "How can we synthesize this molecule given available stock?" This is the inverse of our approach, which asks: "Given available stock, what optimal products can be created?" See reply `4` to **KAF5** for more about our motives. --- Reference - [1] Kusanda et al., *NeurIPS'22*, 2022 [2] Gao et al., *arXiv*, 2021 [3] Zhao et al., *Commun. Chem.*, 2024 --- Rebuttal Comment 1.1: Comment: Thank you for the author rebuttal. A) Thank you for the work in producing these results. B/C) I am not requesting the synthesis or experimentation of any chemical compounds (I agree this would be very out of scope for an ML conference). However, the authors should add a statement about the pitfalls of using metrics such as validity, uniqueness, novelty, and diversity. Such metrics can be easily hacked, for example, by adding a couple "C"s into the smiles string: [AddCarbon model] (https://www.sciencedirect.com/science/article/pii/S1740674920300159) Thank you for the additional work. I will increase my score. --- Reply to Comment 1.1.1: Comment: Dear reviewer **h6f3**, we reiterate our thanks for your valuable advice and perspective on the used metrics, and for your constructive review overall on Pareto superiority ! We will make sure to discuss the pitfalls of such commonly used metrics in future versions. We share your opinion as this tackles broader applications in drug discovery - not only limited to our work - that should also be addressed in future works. We thank you again for your thoughtful feedback during the entire review process. We also thank you for increasing your score!
Summary: This paper introduces CombiMOTS, a novel method for dual-target molecule generation. It addresses the limitations of existing approaches, which often simplify the multi-objective nature of the problem into a linear combination of objectives and may not consider synthesizability. CombiMOTS leverages Pareto Monte Carlo Tree Search (PMCTS) within a fragment-based drug discovery framework. Claims And Evidence: The claims made in the submission are generally well-supported by the evidence provided. * **Claim:** CombiMOTS outperforms existing methods in generating dual-target molecules with better trade-offs between target engagement and molecular properties. * **Evidence:** Table 1 and Figures 3, 6, and the supplementary figures provide strong quantitative evidence. CombiMOTS consistently achieves higher novelty and diversity scores, while maintaining competitive or superior docking scores compared to the baselines. The density plots visually demonstrate the superior trade-offs. * **Concerns:** None major. * **Claim:** The use of Pareto optimization is crucial for achieving these improved trade-offs. * **Evidence:** The comparison with the scalarized version of CombiMOTS (Figure 7 and Appendix C.1.1) provides compelling evidence. The scalarized version performs worse in terms of finding balanced properties, highlighting the importance of the Pareto approach. * **Concerns:** None major. * **Claim:** The fragment-based approach and use of the Enamine REAL Space ensure synthesizability. * **Evidence:** The methodology section clearly describes the process of mapping fragments to REAL Space building blocks and using reaction templates. The reported SA scores are generally low, indicating good synthesizability. The comparison with Swanson et al. (2024), who focused on the REAL space. * **Concerns:** While SA score is a good proxy, it's not a perfect guarantee of synthesizability. Experimental validation would *strengthen* this claim, but is beyond the scope of a typical ML paper. * **Claim:** CombiMOTS has practical utility, analysis of Case Studies * **Evidence:** Analysis of case studies for compounds generated for GSK3B-JNK3. * **Concerns:** None. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally appropriate for the problem. * **Methods:** The use of PMCTS is a well-established technique for search and optimization. Adapting it to a combinatorial, multi-objective, and fragment-based setting for drug discovery is novel and well-motivated. The use of the Enamine REAL Space is a good choice for ensuring synthesizability. * **Evaluation Criteria:** The chosen metrics (validity, uniqueness, novelty, diversity, docking score, QED, SA) are standard and relevant for evaluating generative models in drug discovery. Using multiple target pairs (including newly curated datasets) strengthens the evaluation. Comparing against relevant baselines (RationaleRL, REINVENT, MARS) is also appropriate. * **Concerns** The justification for choosing to prioritize the molecular docking score, other than the other metrics, is not solid. Theoretical Claims: The paper does not present significant theoretical claims in the form of theorems and proofs. The theoretical underpinning is the application of Pareto optimality and MCTS, which are well-established concepts. The description of Pareto dominance and Pareto fronts is correct. Experimental Designs Or Analyses: The experimental designs are generally sound. * **Datasets:** Using established datasets (GSK3β-JNK3) and curating new ones (EGFR-MET, PIK3CA-mTOR) according to established protocols is good practice. The data curation process is described in detail in the appendices. * **Baselines:** The chosen baselines are relevant and represent different approaches to molecule generation. The authors made efforts to reproduce the baselines in their best reported settings and even adapted them to use docking scores as objectives for a fairer comparison. * **Metrics:** The use of multiple metrics to evaluate different aspects of the generated molecules is comprehensive. * **Ablation Studies:** The comparisons with the scalarized version of CombiMOTS and the six-objective version (in the appendix) are valuable ablation studies that demonstrate the importance of the Pareto approach and provide insights into the impact of adding more objectives. * **Concerns:** Ablation study of the Fragment Extraction and Search Space Reduction is missing. Supplementary Material: I reviewed the supplementary material. It includes: * Detailed algorithm descriptions (Algorithm 1 and 2). * Implementation details (objective functions, oracle settings, data curation). * Additional experimental results (ablation studies, property distributions). * Data statistics. * Case study details. Relation To Broader Scientific Literature: The paper is well-situated within the broader scientific literature on drug discovery, generative models, and multi-objective optimization. It builds upon: * **Fragment-Based Drug Discovery (FBDD):** The paper clearly positions itself within the FBDD paradigm and cites relevant work. * **Generative Models for Molecules:** It cites and compares against relevant generative models, including VAE-based, RL-based, and MCMC-based approaches. * **Multi-Objective Optimization:** It correctly references Pareto optimization and its application in other fields. * **Monte Carlo Tree Search (MCTS):** It builds upon existing work on MCTS and PMCTS, citing relevant papers. Essential References Not Discussed: Essential related works have been discussed. Other Strengths And Weaknesses: **Strengths:** * **Novelty:** The combination of PMCTS with a combinatorial, fragment-based approach for dual-target molecule generation is novel and well-motivated. * **Effectiveness:** The experimental results demonstrate that CombiMOTS outperforms existing methods on the chosen tasks. * **Synthesizability:** The focus on synthesizability through the use of the Enamine REAL Space is a significant strength. * **Comprehensive Evaluation:** The use of multiple metrics and target pairs, along with ablation studies, provides a thorough evaluation. **Weaknesses:** * **Computational Cost:** The paper acknowledges the computational cost of PMCTS, particularly with more objectives. While the authors provide runtimes, a more detailed discussion of scalability and potential optimizations could be helpful. * **Justification of Docking score:** The paper needs more justification, why molecular docking score is the main objective. * **Limited Scope:** The evaluation is limited to three dual-target pairs and a specific set of objectives. While this is understandable, it would be interesting to see how CombiMOTS performs on a wider range of tasks and with different objective functions. * **Missing Ablation study**: The ablation study is missing for Fragment Extraction and Search Space reduction steps. Other Comments Or Suggestions: Reference to the **Weaknesses** part. Questions For Authors: 1. **Computational Cost and Scalability:** Can you elaborate on the scalability of CombiMOTS, particularly as the number of objectives or the size of the search space increases? Are there any specific optimizations you have considered or plan to explore to improve efficiency? How does the computational cost compare to the baselines (even if approximate)? 2. **Objective Function Design:** You prioritize docking scores over QED and SA. Could you elaborate on the rationale behind this choice? What are the potential limitations of this prioritization, and how might it affect the generated molecules? Have you experimented with different weightings or combinations of objectives, and if so, what were the results? 3. **Generalizability:** How do you envision CombiMOTS being applied to other drug discovery tasks beyond dual-target inhibitor design? Could it be adapted to handle different types of targets (e.g., protein-protein interactions) or different objective functions? 4. **Fragment Extraction:** Could you elaborate on the choice of FGIB for fragment extraction? Were other fragment extraction methods considered, and if so, why was FGIB chosen? What are the potential limitations of the chosen fragment extraction method? 5. **Search Space Reduction.** How sensitive is the performance of CombiMOTS to the Tanimoto similarity threshold used for search space reduction? Did you perform any experiments to optimize this threshold? 6. **Ablation Study**: The ablation study of the Fragment Extraction and Search Space Reduction steps, are important to show the efficiency of the whole pipeline. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer **6Myy**, we appreciate your insights which helped improve our work! Particularly, ablation studies allowed us to strengthen our claims. We address your concerns below and through https://anonymous.4open.science/r/CombiMOTS-0FEB. --- Concern - > A) "SA score doesn't guarantee perfect synthesizability". See reply `4` to **KAF5**. --- Asked Questions - > 1. **Computational Cost and Scalability** - *#Objectives*, *Runtime* & *Solution Convergence*. > 2. **Objective Function Design** - Docking Score over QED/SA. &rarr; See reply `I` to **pmFQ** and `2` to **KAF5**. > Have you experimented with different weightings[...]? To clarify the use of weights, CombiMOTS uses property vectors to compute Pareto fronts and sample nodes: weights don't affect the relative difference between nodes. > 4. **Fragment Extraction** - FGIB & 6. **Ablation Study** FGIB extends the widely used BRICS [1] method by breaking molecules into retrosynthetically interesting fragments, which is preferred over rule/frequency-based methods when combined with Enamine REAL Space. Some works adapt BRICS to needs with a fixed goal (e.g. connectivity-awareness [MiCaM [2]], ADMET explanability [pBRICS [3]]), thus not suited for the dual-inhibition task where target proteins are user-defined. Our goal is to capture high-property fragments for any **target property**. To further justify our choice, we report results from 10,000 rollouts on the GSK3B-JNK3 task when replacing FGIB with naive BRICS and MiCaM, a connection-aware motif-mining approach to decompose known active compounds. |Method|#BuildingBlocks|#PossibleProducts|#Dual-Actives| |-|-|-|-| |BRICS|4,430 (747 not found by FGIB)|~1.7M|1,815/12,559 (**14.45%**)| |MiCaM|12,274 (8,428 not found by FGIB)|~26M|1,445/13,263 (**10.90%**)| |FGIB (base)|14,366 (10,683 unique vs. BRICS, 10,520 vs. MiCaM)|~25M|3,662/15,423 (**23.74%**)| FGIB successfully identified goal-specific fragments, resulting in a **significantly higher rate** of "dual-active" compounds predicted to interact with both proteins. We report metrics/distribution plots in the `rebuttal` folder of the anonymous github. All methods exhibit similar performance, which is sound as FGIB only impacts the search space through the selection of **initial blocks**: MCTS objectives and convergence speed are "as good", but the attainable space contains less dual actives. > B) Potential limitations of FGIB. FGIB uses MLP-based modules to learn data features: its performance depends on data quality. However, our ablation study (`Appendix C.3`) demonstrates that combined with Pareto MCTS, data scarcity is alleviated to some extent. > 5. **Search Space Reduction** - Thresholds & 6. **Ablation Study** - Search Space Reduction We investigate the use of various thresholds and report results from 10k rollouts on the GSK3B-JNK3 task for lower and higher values of 0.3, 0.5 & 0.6. |Threshold|# Blocks|# Possible Products|# Dual-Actives| |-|-|-|-| |0 (Full Space)|139,493|>31B|(not done)| |0.3|54,811|~478M|2,833/13,556 **(20.90%)**| |0.4 (base)|14,366|~25M|3,662/15,423 **(23.74%)**| |0.5|3,737|~1.1M|1,380/11,632 **(11.86%)**| |0.6|858|~43K|317/9433 **(3.36%)**| You may refer to metrics and figures added to the repository. We make three observations: - The threshold greatly affects the search space size due to the small nature of FGIB fragments and Enamine blocks. Tanimoto metric being fingerprint-based, changing a small motif is relatively impactful. - For higher thresholds, the search space is too small to allow good exploration. There is a significant drop in #dual-actives, and metrics also show that using thresholds above 0.6 yield a drop in diversity (88.67→78.73%) and consistency across molecular properties. - For lower thresholds, CombiMOTS has "more room to explore" but finds worse tradeoffs across QED and #dual-actives. As more reactions are possible, Pareto fronts are larger and convergence to optimal solutions is slower (see `H) Theroretical Analysis` to reviewer **pmFQ**). Optimal thresholds are specific to the target properties, but our experiments suggest that a search space of magnitude 10M yields better convergence within a reasonable budget (~10k rollouts). > 3. Generalizability While dual-target inhibition is our main task, we design CombiMOTS can be adapted by customizing property oracles as objectives, or by targeting specific biological systems, narrowing the search space on fragments of interest. Among applications, reviewer **KAF5** and yourself mentioned: - Protein-Protein Interaction (PPI) modulator discovery, where search can aim to uncover molecules tailored to PPI interface [Hot2Mol [4], GENiPPI [5]]. - Toxicity Optimization (see reply `6` to **KAF5**). - Molecular Selectivity (see discussion `B` with **KAF5**). --- Reference - [1] Degen et al., *ChemMedChem*, 2008 [2] Geng et al., *arXiv*, 2023 [3] Vangala et al., *JCIM*, 2023 [4] Sun et al., *bioRxiv*, 2024 [5] Wang et al., *J. Cheminform.*, 2024 --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal and for conducting the additional experiments regarding fragment extraction and search space reduction thresholds. I appreciate the effort to provide these ablation studies (addressing Questions 4, 5, 6). They offer helpful clarification on the impact of FGIB compared to other methods and the sensitivity to the Tanimoto threshold, providing further insight into the pipeline's design. However, while these clarifications are welcome, my overall assessment of the paper remains consistent with my initial review. The concerns regarding the justification for prioritizing docking scores (Question 2), the potential computational cost and scalability (Question 1), and the relatively limited scope evaluated still weigh significantly in my view. Therefore, although the additional experiments are informative, they do not sufficiently overcome the previously identified weaknesses to warrant an increase in the score at this time. My recommendation remains unchanged. --- Reply to Comment 1.1.1: Comment: Dear reviewer **6Myy**, we sincerely thank you for the deep understanding of our work throughout the review process. Your insights enabled to provide further analysis on important ablation studies and practical aspects of our method. Within the rebuttal period, we did our best attempting to address your questions on objective design and scalability with theoretical analysis and additional experiments: we try to further elaborate on these points. - The theoretical analysis to **pmFQ** suggests than intuitively, using more objectives leads to slower convergence because hard to distinguish Pareto optimal molecules. We support this with more evidence by showing below the size of the first six Pareto fronts on 10K samples (generated by CombiMOTS on the GSK3B-JNK3 task), as well as the number of Pareto fronts. |Pareto Rank/Objective Setting|Docking+Activity (4 Obj)|Activity+QED+SA (4 Obj)|All Six Objectives| |-|-|-|-| |Rank 1|60|69|729| |Rank 2|135|204|1416| |Rank 3|180|257|1809| |Rank 4|233|370|1771| |Rank 5|263|456|1456| |Rank 6|316|502|1153| |Number of fronts|40|29|12| We observe that as expected, using six objectives leads to fewer fronts which are individually more populated. - Docking score is critical in drug design because it directly evaluates ligand-target binding affinity, which is essential for therapeutic efficacy. Unlike QED/SA, which focus on general drug-likeness or synthetic feasibility, docking scores are structurally and energetically tied to the biological activity of the molecule. Studies have shown that incorporating docking scores during molecular generation leads to higher binding affinity and improved hit identification [1], whereas QED/SA cannot ensure target specificity [2, 3]. Therefore, docking score should be prioritized in early drug discovery stages, with QED and SA used later for optimization [4]. - This is supported by our additional experiment during rebuttal (comparing both settings). We observed that not optimizing docking scores leads to a significant decrease of quality, but not optimizing QED/SA still leads to acceptable scores, as well as better docking scores. The table above also supports this, as using docking scores over QED/SA interestingly allows to better differentiate Pareto fronts. These results are empirical but align perfectly with the necessity to consider both target affinity and molecular properties within the limitations of the objective settings for Pareto MCTS. Regarding the scope of our work we would like to remind that, though we agree the submitted manuscript did not discuss on other applications, we purposefully focused on the topic of dual-target molecules to fit within the format of a conference paper. The additional case studies conducted after you and reviewer's **KAF5** suggestions [Toxicity Optimization, Molecule Selectivity] exemplify how CombiMOTS could adapt to any multi-objective optimization setting. Besides the limits inherent to the rebuttal, we also deem that extensively elaborating such applications in a single paper could confuse readers. However, we agree that discussing our method's generalizability is important: we intend to integrate these results in future versions' appendices. We thank you again for the thoughtful comments and the time invested! --- Reference - [1] Agu et al., *Scientific Reports*, 2023 [2] Xu et al., *F1000Research*, 2024 [3] Chenthamarakshan et al., *Advances in Neural Information Processing Systems 33*, 2020 [4] Xue et al., *Bioinformatics*, 2025
null
null
null
null
null
null
Penalizing Infeasible Actions and Reward Scaling in Reinforcement Learning with Offline Data
Accept (spotlight poster)
Summary: The paper addresses Q-value extrapolation errors in offline reinforcement learning (RL). It identifies linear extrapolation beyond the data range as a key issue and proposes two methods to mitigate it: (1) reward scaling with layer normalization (RS-LN) and (2) penalizing infeasible actions (PA). These components are integrated into a new algorithm, PARS, which is evaluated on the offline RL benchmark, showing state-of-the-art performance in offline and online fine-tuning, especially in the challenging AntMaze Ultra task. Claims And Evidence: The paper claims that linear extrapolation in Q-functions leads to overestimation and that RS-LN and PA can effectively mitigate this. Empirical results on the D4RL benchmark support these claims, showing that PARS consistently outperforms baselines. However, the evidence could be strengthened by additional ablation studies isolating the effects of RS-LN and PA. Methods And Evaluation Criteria: The methodology is well-grounded, leveraging standard RL benchmarks and evaluation metrics. However, the choice of hyperparameters for PA and RS-LN could be better justified. Additionally, the comparison to other offline-to-online RL methods would benefit from a discussion on computational efficiency. Theoretical Claims: The theoretical analysis of Q-function extrapolation is sound, particularly the discussion of layer normalization’s role in bounding Q-values. Experimental Designs Or Analyses: The experiments are extensive and cover a broad range of RL tasks. But it would be beneficial to explore whether PARS maintains its advantages when applied to more complex tasks beyond D4RL. Supplementary Material: The supplementary material includes implementation details, additional experiments, and theoretical derivations. The inclusion of more implementation details, particularly regarding the tuning of PA parameters, would be beneficial. Relation To Broader Scientific Literature: The paper effectively situates its contributions within the literature on offline RL and critic regularization. Essential References Not Discussed: The paper does not discuss recent advancements in distributional RL, such as IQN or QR-DQN, which could offer alternative solutions to Q-value extrapolation. Additionally, references to prior works on offline-to-online RL transitions, such as model-based fine-tuning strategies, would be benificical. Other Strengths And Weaknesses: Strengths: * A well-motivated approach to mitigating Q-function extrapolation errors. * Strong empirical performance across diverse RL tasks. * Simple yet effective implementation with minimal computational overhead. Weaknesses: * Lack of ablation study of RS-LN and PA’s effectiveness. * Limited discussion on computational complexity. * No exploration of PARS’s applicability to other offline RL benchmarks or real-world tasks. Other Comments Or Suggestions: Clarify how the infeasible action penalty interacts with different action space dimensionalities. Questions For Authors: * How does the computational cost of PARS compare to other offline RL methods in terms of training time and memory usage? * Can RS-LN and PA be effectively combined with model-based offline RL approaches? * How does PARS perform on tasks with highly stochastic dynamics, where infeasible actions are less clearly defined? * Could the proposed approach be extended to address policy extrapolation errors, in addition to Q-function extrapolation? * How robust is PARS to changes in the reward scale, especially when dealing with environments that contain highly varying reward magnitudes or sparse returns? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the careful review of our work, the clarification of various components of PARS, and your thoughtful suggestions. ### [R1] Ablation study separating the effects of RS-LN and PA Beyond the ablation in Figure 9, we isolated PA in a separate experiment. As shown in **Fig B of the link (https://sites.google.com/view/pars-icml25).** - None (without LN or PA): Training becomes unstable as the reward scale increases. - PA: Helps mitigate extrapolation error, though its effect diminishes at larger reward scales. - LN: Performance improves with increasing reward scale. - LN and PA: Combining LN with PA further enhances performance by reinforcing downward Q extrapolation. &nbsp; ### [R2] Hyperparameter justification As the reviewer mentioned, $\alpha$ in Eq. (3), the infeasible action distance, and the reward scale are important factors in PARS. The infeasible action distance is analyzed in Figure 11, reward scale in Figure 9, and $\alpha$ in our response to Reviewer QMiG [R3]. &nbsp; ### [R3] Generalization beyond D4RL Thank you for the helpful suggestion. Additionally, **we evaluated PARS on four environments from the NeoRL-2 [Ref.4] benchmark, which includes more complex and realistic problems**, covering a variety of real-world challenges such as delay, external factors, limited data, and safety constraints. As shown in **Table D of the link (https://sites.google.com/view/pars-icml25)**, PARS surpasses previous SOTA even under more realistic benchmark. &nbsp; ### [R4] Action space dimensionality and PA behavior As shown in Eq.(1), an action is considered infeasible in dimension $i$ if it falls outside predefined thresholds. Thus, $\mathcal{A}_I$ is defined as the union of these infeasible regions across all dimensions. From this union, we sample the same number of infeasible actions as dataset actions per gradient update and assign them a penalty of $Q\_{\text{min}}$. &nbsp; ### [R5] Computational efficiency of PARS In Appendix H, we compared the computational cost of various offline RL algorithms, showing that PARS is more efficient in training time and GPU memory usage. &nbsp; ### [R6] Comparison with uncertainty-based approach Thanks for the insightful point. Both distributional and model-based RL incorporate forms of uncertainty—distributional RL models the distribution over Q-values, while model-based RL estimates uncertainty in learned environment dynamics. These uncertainties help with exploration and OOD detection, albeit often at the cost of added complexity and computation. PARS offers a simple and efficient solution built on TD3+BC, without complex uncertainty modeling, yet still achieves strong performance. We see promise in combining PARS with uncertainty-based methods. In offline RL, penalties may need to differ between interpolation and extrapolation regions, as the latter, lacking clear bounds, can lead to more severe value errors. Penalizing OOD while encouraging downward extrapolation could improve robustness. Similarly, uncertainty-guided exploration during fine-tuning, as reviewers suggested, may boost performance. We'll incorporate these insights in the revision. &nbsp; ### [R7] Applicability to stochastic environments Even in stochastic settings, actions are generally confined within a certain min and max range. Thus, the span defined by these bounds can be regarded as the feasible region. Moreover, instead of sampling infeasible actions that lie just outside the feasible action set, we sample from a sufficiently distant region, using a guard interval. This allows PARS to function without precise knowledge of the feasible-infeasible boundary, supporting robust performance under stochastic dynamics. &nbsp; ### [R8] Extension to policy extrapolation We're not entirely sure what the reviewer means by policy extrapolation, but we interpret it as the policy incorrectly predicting actions for given states. Since the policy is trained via the Q-function, reducing Q-function errors may help mitigate policy errors. However, as Q-function extrapolation and OOD action selection are distinct, directly applying PARS to improve policy accuracy is not straightforward. If we have misunderstood, we would be grateful if the reviewer could kindly clarify. &nbsp; ### [R9] Effectiveness of PARS under sparse or varying reward scales Among our evaluated domains, AntMaze meets the reviewer’s criteria with its sparse reward setting. As shown in Figure 9, performance improves with larger reward scales, highlighting a clear advantage over other baselines. We also analyzed the case where noise is added to the reward; for details, please refer to our response to reviewer snRz [R4]. &nbsp; [Ref.4] Gao, Songyi, et al. "NeoRL-2: Near Real-World Benchmarks for Offline Reinforcement Learning with Extended Realistic Scenarios." arXiv preprint (2025). &nbsp; Thank you once again for your thoughtful review. We hope that our responses have thoroughly addressed your concerns.
Summary: In this paper authors address Q-value overestimation problem in offline RL when in the presence of infeasible actions. Authors propose to use two diffferent strategies together, to scale reward and penalize infeasible actions. Layer normalization, as proposed in previous research, is also found to be important to use with reward scaling. ## update after rebuttal I appreciate authors rebuttal, but taking into account rebuttal and other reviewers opinions I have decided to raise my score from 3 to 4. Claims And Evidence: Claims are clear, but as the claim is to show that Q-values are not overestimated after the proposed tricks, I do not see any extensive numerical evaluations where we could see what happens to Q-values with and without the proposed tricks. Final benchmarking results are of course shown, but it would be good to tie these things together more strongly. Methods And Evaluation Criteria: Standard bencmarking datasets and environments are used and evaluation criteria as employed in those is applied. Only thing, as mentioned in the previous box is that the estimated Q-values specifically are not shown. Theoretical Claims: none Experimental Designs Or Analyses: Analyses provided in numerous Figures are instructive and well designed. Supplementary Material: I did glance, but did not thoroughly review any portion. Relation To Broader Scientific Literature: - Essential References Not Discussed: - Other Strengths And Weaknesses: Paper is interesting and well written. It proposes a novel but simple tricks to make offline RL work in OOD case. Empirical resutls are also very strong. I have two main questions: 1. infeasible set of actions as in Eqs. 1-2 work only for continuous actions. What happens with discrete actions? In more complex games number of discere actions can happen quite rarely. Leading to similar OOD case as in continuous actions. 2. \alpha paramerer in Eq. 2 seems to be an important one. But I do not see any experimental, nor theoretical analysis on how to set it. Other Comments Or Suggestions: - Many Figures in Section 6 do not have y-axis labels. Please mark them, sometimes it is not self evident whether it is a reward that is recorded. Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive feedback and your constructive suggestions that allowed us to strengthen areas we may have initially missed. ### [R1] Empirical evidence on Q-value overestimation reduction Thanks for the suggestion. Measuring extrapolation error in high-dimensional state-action spaces is challenging, as it requires sampling outside the dataset’s convex hull—a non-trivial task in such high-dimensional settings. Therefore, following the approach in [Ref.3], we visualized the Q-values learned by applying LN, RS-LN, and RS-LN&PA on the Inverted Double Pendulum with a 1D action space. The visualizations can be found at **link (https://sites.google.com/view/pars-icml25). As shown in Fig A of the link**, and consistent with the discussion in Figure 5 of the manuscript, we observe that training the Q-function without LN leads to divergence. Applying LN prevents this divergence. Furthermore, when reward scaling and PA are also applied, the extrapolation curve bends downward while maintaining a similar overall shape within in-sample actions. We also observe that with reward scaling and PA, the average magnitude of Q-values decreases, suggesting a general reduction in overestimation. &nbsp; ### [R2] Applicability to discrete action spaces In our work, we focused on the problem of Q-function extrapolation error in continuous action spaces. In discrete action spaces, the action set is finite, making the notion of being “outside” the feasible set less clear—and upward extrapolation of the Q-function less relevant. Since many practical tasks, such as robot control, involve continuous control, and many offline RL algorithms target this setting, we also concentrate on continuous control. We believe that achieving strong performance in this setting, compared to other offline RL algorithms that also focus on continuous control, is a substantial contribution. &nbsp; ### [R3] Sensitivity analysis of $\alpha$ hyperparameter As the reviewer pointed out, when applying PA, the hyperparameter $\alpha$ in eq.(3) is important. It should be set such that assigning a $Q_\text{min}$ penalty to infeasible actions does not interfere with the TD update for actions in the dataset. Accordingly, we set $\alpha$ to a small value in the range of 0.0001 to 0.001. We additionally conducted a sensitivity analysis on $\alpha$. As shown in **Table C of the link (https://sites.google.com/view/pars-icml25)**, PARS performs well when $\alpha$ is in the range of 0.001 to 0.1. On the other hand, if $\alpha$ is too small (e.g., 0.0001), the penalty does not take effect properly, and if it is too large, it starts to interfere with the TD update on the dataset, leading to a decrease in performance. &nbsp; ### [R4] Missing y-axis labels in figures Sorry for the confusion, and thank you for the helpful comment. We'll make sure to clearly specify the y-axis in the revised version. &nbsp; [Ref.3] Kim, et al. "Adaptive q-aid for conditional supervised learning in offline reinforcement learning." NeurIPS 2024. &nbsp; Once again, we sincerely thank you and hope that our responses have adequately addressed all of your concerns.
Summary: The paper introduces a method for applying an OOD penalty using reward scaling and LN. When combined with a modified TD error loss—incorporating a PA penalty—the proposed approach enhances TD3-BC’s performance, achieving strong results on benchmarks, particularly in maze tasks. I quite like the paper, as it provides a new perspective by investigating how Q-function evaluation interacts with reward tuning and network design. The insights are intriguing and convincing. However, I noticed that many interesting discussions are deferred to the appendix. I suggest moving some of these discussions—such as the reasoning behind using TD3-BC instead of IQL—into the main text for better accessibility. Claims And Evidence: Most of the claims are supported by explanations and toy tasks. However, I believe the paper would benefit from additional theoretical analysis. Currently, the entire work appears too heuristic, as it lacks theoretical proofs or formally defined mathematical assumptions. Some claims, such as the necessity of a guard interval, are explained but not theoretically justified. I would be eager to see more theoretical insights if the authors could elaborate on the following points: 1. Why do reward scaling, LN, and ReLU activation lead to good performance, while other combinations do not? Is there any theoretical justification for this choice? 2. Why is a guard interval necessary, and how should its size be determined? 3. With the modified TD loss and the designed $Q_{\min}$, does this modification influence the convergence behavior of Q-value estimation? Methods And Evaluation Criteria: The paper evaluates its method on well-accepted benchmarks. Theoretical Claims: No theoretical results are provided in this paper. Experimental Designs Or Analyses: I did not find any explicit issues with the experiments or toy examples. Supplementary Material: I reviewed all parts of the appendix and found that it contains several valuable discussions. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: The paper is novel and easy to follow. The visualizations are well-designed and provide valuable support in understanding the concepts presented. The analysis of interactions between reward scaling, LN, and ReLU offers an intriguing and fresh perspective. For weaknesses, please refer to my previous section's content and the questions outlined below. Other Comments Or Suggestions: None. Questions For Authors: 1. The definition of OOD relies on the convex hull, but why is it defined this way? I assume the definition is inspired by action interpolation, but is this approach too restrictive? Would it be possible to consider a more flexible, nonlinear boundary for defining OOD actions? 2. I have some concerns regarding reward scaling. In MuJoCo, the rewards are well-designed and provide clean signals. However, in most practical cases, rewards tend to be noisy, and scaling them up to a large magnitude (e.g., $10^4$) could induce **high variance**, potentially leading to training instability or even collapse. Do the authors have any insights on this issue? 3. I find that Figure 1 may not be very informative. Could the authors consider replacing it or, alternatively, moving some discussions from the appendix into the main text? For instance, the discussion on IQL was particularly interesting to me. 4. Q-ensemble is commonly used to mitigate Q overestimation, but is it redundant when using the PA penalty? While I see that the ensemble improves performance, does this imply that the PA penalty does not fully mitigate the overestimation issue? Do the authors have any insights into this? 5. The success of PARS seems to depend on choosing TD3-BC, as it still allows sampling of OOD actions. However, if the policy has high expressive power and can accurately capture all in-sample actions—such as in the case of diffusion policies—would PARS still be effective? If not, does this mean that PARS is mainly beneficial for "less expressive" policies and may not help much for "highly expressive" policies? I am happy to raise my score if the authors can address my concerns. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the positive feedback on our work and the opportunity to clarify any remaining uncertainties. ### [R1] Theoretical justification of PARS Thank you for the suggestions. As theoretical analysis of deep neural networks with nonlinearities is highly challenging, we empirically validated the effectiveness of RS-LN and leave theoretical aspects for future work. We will clarify this point in the limitations. As noted in Appendix B, we used ReLU due to its popularity and its frequent use in prior works, and exploring other activations better suited to RS-LN, especially with PA, is a promising future direction. We can show the convergence of PARS, and we will add this to the paper. For this we define the following sets: - $\mathcal{ID} = \{(s, a) \mid \beta(a \mid s) > 0\}$ - $\mathrm{OOD}_{\text{in}} = \mathrm{ConvexHull}(\mathcal{ID}) \setminus \mathcal{ID}$ - $\mathrm{OOD}_{\text{out}} = \left( \mathrm{ConvexHull}(\mathcal{ID}) \right)^c$ We define the operator $T^\pi_{\mathrm{pars}}$ as: $ T^\pi\_{\mathrm{pars}} Q(s,a) = \begin{cases} T^\pi Q(s,a), & \text{if } (s,a) \in \mathcal{ID}, \\\\ \mathbb{E}\_{(s',a') \in \mathrm{kNN}(s,a; \mathcal{ID})}[T^\pi Q(s',a')], & \text{if } (s,a) \in \mathrm{OOD}\_{\text{in}}, \\\\ Q\_{\min}, & \text{if } (s,a) \in \mathrm{OOD}_{\text{out}}, \end{cases} $ where $\mathrm{kNN}(s,a; \mathcal{ID})$ denotes the set of the k-nearest neighbors within $\mathcal{ID}$. We show that $T^\pi_{\mathrm{pars}}$ is a contraction under the$\| \cdot \|_\infty$ norm: - Case 1:$ \left|T^\pi_{\mathrm{pars}} Q_1 - T^\pi_{\mathrm{pars}} Q_2\right| = \left|T^\pi Q_1 - T^\pi Q_2\right| \le \gamma \| Q_1 - Q_2 \|_\infty$ - Case 2:$\bigl|T^\pi\_{\mathrm{pars}} Q_1 - T^\pi_{\mathrm{pars}} Q_2\bigr| = \left| \mathbb{E}\_{\mathrm{kNN}}[T^\pi Q_1] - \mathbb{E}\_{\mathrm{kNN}}[T^\pi Q_2] \right| \le \gamma | Q_1 - Q_2 |_\infty$ - Case 3:$\left| Q_{\min} - Q_{\min} \right| = 0 \le \gamma \| Q_1 - Q_2 \|_\infty$ Thus, $T^\pi\_{\mathrm{pars}}$ is a contraction, and the Q-function converges accordingly. In Case 2, computing $\mathrm{kNN}(s,a; \mathcal{ID})$ is costly. Since the Q-function is approximated by a neural network, we let it implicitly learn this behavior instead of adding it to the loss. For the same reason, assigning $Q_{\min}$ can unintentionally affect nearby values. To prevent this, we introduce a **“guard interval”** that excludes actions just outside $\mathcal{A}_F$ from being evaluated as $Q\_{\min}$. In practice, a guard interval of about 1000 (see Figure 11) is sufficient to avoid influencing Q-updates. &nbsp; ### [R3] Regarding convex hull definition PARS aims to be a simple, efficient algorithm. While nonlinear boundaries are certainly possible, we opt for a convex hull to maintain simplicity. &nbsp; ### [R4] Effectiveness of reward scaling in noisy environments Assuming the reward is added by noise, scaling the noisy reward increases both true reward and noise proportionally, leaving the signal-to-noise ratio unchanged. To investigate further, we referred to [Ref.2], which analyzes offline RL under reward noise. Following this, we examined how PARS performs in such noisy reward settings. As shown in **Table A of the link** (https://sites.google.com/view/pars-icml25), in the presence of reward noise **the trend of improved performance with increased reward scale still holds. So the core idea of PARS remains valid.** &nbsp; ### [R5] Regarding presentation In the revision, we will move Appendix D to the main body, possibly by shortening Fig. 1 and others. &nbsp; ### [R6] Role of Q-ensemble under the presence of PA penalty As noted in Appendix B, PARS primarily targets extrapolation errors outside the convex hull and does not explicitly address OOD actions within it. These actions may still face approximation errors, which could be mitigated through ensemble-based uncertainty estimation. However, as shown in **Table B (https://sites.google.com/view/pars-icml25)**, PARS outperforms SOTA in AntMaze even without an ensemble. &nbsp; ### [R7] Dependency on policy expressiveness As the reviewer noted, policy expressiveness is important, and methods like Diffusion-QL use diffusion models to enhance it. But, despite using diffusion, Diffusion-QL underperforms compared to PARS and suffers from the high inference cost of diffusion models. Since policies are derived from the Q-function, we argue that improving Q-network expressiveness is even more crucial. Our work focuses on this, and notably, PARS achieves strong performance with just a simple MLP policy. Ultimately, we believe that combining efforts to improve both policy and Q-network expressiveness is key to building better offline RL algorithms. &nbsp; [Ref.2] Yang, Rui, et al. "Towards Robust Offline Reinforcement Learning under Diverse Data Corruption." ICLR 2024. &nbsp; Thank you again for the careful review. We hope that our responses have sufficiently addressed the concerns. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' detailed response. Most of my concerns have been addressed. However, I still have one remaining question regarding policy expressiveness. From your analysis with IQL, it seems that your method may contradict the use of highly expressive policies. If the policy is already expressive, it is less likely to encounter OOD actions. However, your method appears to benefit significantly from the presence of OOD actions — suggesting that the ability to correct or account for these OOD actions is key to its strong performance. Does this imply that the advantages of your method diminish when using highly expressive policies? In other words, would we see less improvement if the base policy already performs well? --- Reply to Comment 1.1.1: Comment: Thanks for the further response. &nbsp; Note that high policy expressiveness means the policy's capability to express something complicated more accurately with high fidelity. Consider the following two major policy extraction methods in offline RL: (1) Weighted behavioral cloning (IQL) - $\max_{\pi}\mathbb{E}\_{s,a \sim D} \left[ e^{\alpha (Q(s,a) - V(s))} \log \pi(a \mid s) \right], ~~~eq.*$ (2) Behavior-constrained policy gradient (TD3+BC, diffusion-QL) - $\max_{\pi} \mathbb{E}\_{s \sim \mathcal{D},\ a' \sim \pi(\cdot|s)} \left[ Q(s, a') \right] - w \cdot \mathbb{E}\_{s \sim \mathcal{D}} \left[ \mathcal{R}(\pi(\cdot|s), \pi_\beta(\cdot|s)) \right],~~~ eq.**$ where $\mathbb{E}_{s \sim \mathcal{D}} \left[ \mathcal{R}(\pi(\cdot|s), \pi\beta(\cdot|s)) \right]$ is a behavior regularization term. So, a policy with higher expressiveness can make the above objective larger by better fitting the policy $\pi(a|s)$ to the desired target function, e.g., $\exp( \alpha (Q(s,\cdot)-V(s))$ in eq.* in (1). &nbsp; Now consider the example of Fig C (a) at the following anonymous link (https://sites.google.com/view/pars-icml25-2) and consider the full in-sample policy learning using eq.* of IQL for this example . As shown in the figure, there are two non-contiguous in-sample regions, with an intermediate $\mathcal{A}\_{OOD-in}$ region. Assuming a perfectly learned Q-function and a highly expressive policy $\pi_\theta$ initialized as $\pi_\theta(a|s) \approx 0,~\forall a$, with random zero-mean weght $\theta$, solving eq.* using only in-sample data will lead to the learned policy shown in Fig C (b) of the same link. In this case, the policy accurately follows the Q-function within the in-sample regions but assigns near-zero probability elsewhere due to its high capacity and zero initialization. This policy will not generate OOD actions as the reviewer mentioned. But, note that the optimal action for the true Q in Fig C (a) is not the best in-sample action produced by this policy. Therefore, prior work ([Ref. 5]) highlights that **allowing a certain degree of deviation from the in-sample region without straying too far, and broadening the range of considered actions can improve the use of the Q-function over a wider coverage and enhance overall performance when the Q-function is well trained. In addition, when considering online finetuning, assigning near-zero probability to all OOD actions can severely limit online exploration, restricting performance improvements.** Note that in the above case of in-sample Q learning, Q function estimation with a more expressive Q function approximator may not increase the performance significantly, as the reviewer noted, because the action is confined to the in-sample region. But, even in this case, there was a nontrivial performance increase of IQL with RS-LN applied to IQL Q-function learning, as shown in Table 5 in Appendix D, repeated in Table E in the link (https://sites.google.com/view/pars-icml25-2). This may be because a more expressive Q-function approximator provides better estimates even for in-sample Q-values, as hinted by the fact that adding PA on top of RS-LN does not improve performance and IQL policy mostly samples in-sample actions. &nbsp; To overcome the limitation of in-sample policy learning of IQL, we need to go beyond in-sample policy learning. One way is to consider the policy learning, eq. ** in (2). e.g., TD3-BC in which $\mathcal{R}(\pi(\cdot|s), \pi_\beta(\cdot|s))= \left\| \pi(s) - a \right\|^2$. Here, the policy is allowed to generate OOD action $a'$ from $s'$ as shown in eq. ** but not too far from in-sample actions. Then, the Q-values of OOD actions, especially those in $\mathcal{A}_{OOD-out}$, matter now, and suppressing the upward trend in $\mathcal{A}\_{OOD-out}$ becomes crucial. Here, RS-LN and PA play an important role, creating strong synergy with (2), which leads to strong performance in both offline training and online finetuning. &nbsp; **In summary, applying RS-LN to Q function approximator yields performance gain both in IQL and TD3-BC, but the major influencing mechanisms are different (i.e., more accurate in-sample domain Q learning for IQL and more accurate in-sample domain Q learning + slight increase of action range + suppression of upward trend at $\mathcal{A}_{OOD-out}$ for TD3-BC), and the gain is more significant for TD3-BC.** &nbsp; Another point is that applying eq.* in (1) requires direct computation of $\log \pi(a \mid s)$. However, diffusion models do not provide an explicit probability distribution. Thus, it is not easy to use diffusion models for the in-sample policy learning, eq.* in (1). Indeed, Diffusion-QL [Ref.6], uses eq.** in (2). &nbsp; Thank you. &nbsp; [Ref.5] S. Park et al. "Is Value Learning Really the Main Bottleneck in Offline RL?." NeurIPS 2024 [Ref.6] Z. Wang, J. Hunt, and M. Zhou. "Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning." ICLR 2023
Summary: The manuscript introduces penalizing infeasible actions and reward scaling (PARS), a method for discouraging value overestimation caused by extrapolation in offline reinforcement learning (RL). The proposed method uses reward scaling and layer normalization, which are shown to work together to increase the feature resolution. This has the effect of reducing the similarity between gradient updates corresponding to actions found in the dataset and actions that lie outside the convex hull of “feasible actions”. In addition, the authors also propose to penalize out-of-distribution actions that lie far outside the region of feasible actions. Extensive benchmarking experiments demonstrate that the method consistently achieves superior performance compared to many baselines. Analytical experiments provide additional insights on the contribution of each component and the effects of hyperparameters. ## update after rebuttal The authors' detailed rebuttal provided important clarifications and additional experimental results which support the efficacy of the proposed method. I am now more confident in recommending the paper for acceptance and have accordingly raised my score to a 4. Claims And Evidence: * PARS successfully addresses the value overestimation problem caused by extrapolation in offline RL. This is well supported by better performance in experiments on many environments, including the challenging AntMaze Ultra and Adroit relocate-cloned * Penalizing infeasible actions during online fine-tuning is effective for stabilizing training. Shown in experiments on two difficult environments (Figure 10) Methods And Evaluation Criteria: The proposed method is mostly well-motivated. The benchmarks show clear performance improvements over competing methods. The authors also present several ablation studies and analytical experiments investigating the effect of hyperparameters such as the reward scale or the distance of infeasible actions from the feasible action region. Theoretical Claims: N/A Experimental Designs Or Analyses: My main concern is whether the comparison with other algorithms is fair in the sense that several hyperparameters of PARS were tuned per environment. Is that the case for every score that you report from other papers and did you have the same search budget for hyperparameters of other methods that you ran yourself? Supplementary Material: None provided Relation To Broader Scientific Literature: The proposed method is novel and addresses the important problem of value overestimation due to extrapolation error in offline RL that has been studied before. It is the first to make a distinction of OOD actions that are within the convex hull of the actions in the dataset and those outside. Discouraging high Q-values for actions outside this region of “feasible” actions seems to be very effective at mitigating extrapolation errors. Essential References Not Discussed: I think the following work is related: * Gulcehre, C., Colmenarejo, S. G., Wang, Z., Sygnowski, J., Paine, T., Zolna, K., ... & de Freitas, N. (2021). Regularized behavior value estimation. arXiv preprint arXiv:2103.09575. Other Strengths And Weaknesses: Strength: Extensive analytical experiments that show efficacy of each contribution or support design choices such as the distance of sampled infeasible actions from the feasible action region (Figure 11) Weakness: I think the motivation for penalizing infeasible actions is not very strong. I understand that it probably helps to intentionally push down the Q-values at far regions of the action space, but I’d like to know your thoughts on why it improves the performance compared to only using reward scaling + layer normalization. Other Comments Or Suggestions: * I think it’d be worthwhile looking at how PARS affects policy churn (Schaul et al., 2022) * Can you comment on whether PARS has the potential to improve safe RL in the sense that it reduces sampling of actions that are far from the explored action space? Schaul, T., Barreto, A., Quan, J., & Ostrovski, G. (2022). The phenomenon of policy churn. Advances in Neural Information Processing Systems, 35, 2537-2549. Questions For Authors: * Is layer norm only applied to the pen-ultimate layer, i.e. the one before the output weights, or to every layer? * In Figure 10 (left), the LN online fine-tuning seems to reduce performance. Why is that? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive evaluation of our contribution and suggestions regarding various prior studies and potential directions for extending PARS. ### [R1] Comparison of tuning search budget for hyperparameters with baselines As noted in our manuscript, we tuned around 8 hyperparameter configurations, varying $\alpha$, TD3+BC $\beta$, and reward scale. For context, prior offline RL works we compare against also performed environment-specific tuning: ReBRAC used 25 configurations for MuJoCo and Adroit and 192 configurations (grid over 4 actor betas, 4 critic betas, 4 actor learning rates, and 3 critic learning rates) for AntMaze, SAC-RND used 19 (MuJoCo) and 9 (AntMaze), and SPOT, MSG, and MCQ used 6, 12, and 7, respectively. SAC-N and EDAC used 7 and 12. As shown above, most of the prior works **we compare against adopt a similar level (some even more extensive) of hyperparameter tuning, which supports the fairness of our comparisons.** For TD3+BC, IQL, and CQL, we report results from their original papers (1–2 configurations), and ReBRAC [Ref.1] provides extensively tuned results for them—yet PARS still performs strongly. Importantly, as shown in Appendix G, PARS achieves strong AntMaze performance even with a **single** hyperparameter configuration, outperforming other baselines. &nbsp; ### [R2] Motivation for PA As discussed in Section 3.2, our goal is to encourage the learned Q-function to exhibit downward extrapolation for OOD actions that lie outside the convex hull of the dataset. To achieve this, RS-LN first increases the expressivity of the neural network, which naturally enables downward extrapolation by preventing gradient updates that would otherwise increase the Q-value positively for OOD actions. In addition, PA makes this process more explicit by penalizing infeasible actions, thereby enforcing downward extrapolation. As demonstrated in the ablation study in Figure 9, RS-LN has a significant effect on its own, but applying PA on top further enforces downward extrapolation, contributing stability and performance of offline RL training. &nbsp; ### [R3] Relation to prior work Thank you for pointing out the relevant prior work. We will include the study in the revision. In particular, R-BVE also aims to mitigate Q-function extrapolation errors using SARSA-based in-sample Q-learning and ranking loss, performing policy improvement only once at the end. However, this single-step approach may achieve insufficient performance improvement in complex environments or where rewards are sparse. While the ranking loss aligns Q-values based on high-reward trajectories, it risks performance degradation when applied to low-reward data, requiring additional mechanisms like soft filtering that increase implementation complexity. In contrast, PARS directly addresses Q-value extrapolation, especially outside the convex hull of the data, by combining RS-LN and PA. This provides a stable foundation for repeated policy improvement and achieves both simplicity and superior performance. &nbsp; ### [R4] Regarding policy churn Thanks for the insightful suggestion. The referenced paper studies policy churn—sudden shifts common in value-based methods like DQN. In contrast, PARS builds on TD3+BC with a behavior cloning term in the actor, which helps suppress such changes. While this likely reduces policy churn, continual critic updates may still cause some churn, potentially aiding exploration. Quantitatively analyzing churn in critic-regularized methods like PARS would be a valuable direction for future work. &nbsp; ### [R5] Implications for safe RL In our study, we define infeasible actions as those lying outside the set of feasible actions. However, this definition can be extended by incorporating various constraints depending on the objectives of different tasks. In particular, by establishing clear criteria for unsafe infeasible actions and designing the system to penalize such actions accordingly, the framework can be naturally extended to safe RL. &nbsp; ### [R6] Where layer normalization is applied We applied LN after all linear layers, including the input layer, except for the final output layer. &nbsp; ### [R7] Regarding figure 10 (left) As the reviewer mentioned in another comment, a key issue during online fine-tuning is that changes in the distribution can lead to unstable learning. Since PA was applied during offline training, we observed that removing PA during online fine-tuning led to instability. On the other hand, maintaining PA throughout resulted in more stable learning and led to significant performance improvements. &nbsp; [Ref.1] Tarasov, Denis, et al. "Revisiting the minimalist approach to offline reinforcement learning." NeurIPS 2023. &nbsp; We sincerely appreciate your thoughtful review once again. We hope that our responses have thoroughly addressed all of your comments. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response including clarifications regarding hyperparameter tuning for baselines, the relation to Gulcehre et al., and the placement of layer norm modules. I have read the detailed rebuttals for the other reviews of this manuscript, which also include additional analytical experiments that support the efficacy of the proposed method. I will increase the score to a 4. Minor feedback: Regarding R7, I think it'd make sense to just add a very short note indicating that the fine-tuning experiments are *after pre-training with PA*, even though that appears somewhat obvious in hindsight. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for continuously engaging with our response and for adjusting the recommendation. Your valuable suggestions and comments have helped improve our work. We will also incorporate the additional comment regarding R7 into the revision.
null
null
null
null
null
null
Grokking Beyond the Euclidean Norm of Model Parameters
Accept (poster)
Summary: The paper studies grokking i.e., delayed generalization analytically in sparse recovery and matrix factorization settings. The main finding i in the paper provides settings that question the accepted wisdom in grokking - a small low $L_{2}$ regularization (weight decay) is necessary for grokking and generalization. In other words, $L_{2}$ norm cannot be used as an indicator of delayed generalization. The paper provides extensive analysis and some empirical results to support the main claim ## update after rebuttal My current rating is 3. I like the paper but am unwilling to move the score up after reading the rebuttal and other reviews. I have started a discussion with the AC and other reviewers to obtain clarifications but yet to hear from them. Score is unchanged but I have already marked it as a 3. Claims And Evidence: The main claim is that if a problem as a certain property $P$, then appropriate regularization $L_{p}$ is required to reach the optimal solution. - Analytical results appear to support the claim - Experiments on synthetic data supports this claim as well in Section 2 and Section 3 - Non-linear teacher student model also appears to support the claim The support provided with PINNs and classification are not as clear as the above primarily because the nature of the optimal solution is not obvious (at least to this reader) Methods And Evaluation Criteria: The paper is theoretical work, so the datasets used with nonlinear models are acceptable Theoretical Claims: The paper provides a theorem in Section 2 and one in Section 3 for sparse recovery and matrix factorization respectively. The theorems actually summarize several theorems proposed and proved in the appendix. The issue is its easy for the reader to get lost between the claims in Theorem 2.1 (and 3.1) as the text indicates they summarize several claims and proofs in the appendix. It would be better if the Theorem in the main text has a proof sketch in the main paper so that the interested reader can fill in the details from proof(s) in the appendix. Experimental Designs Or Analyses: The paper is theoretical work, so the experimental implementation is acceptable Supplementary Material: I read some proofs in the appendix and skimmed some empirical results detailed in the appendix. The paper is very technical with an appendix 70+ pages long. Relation To Broader Scientific Literature: The paper does an excellent job relating their work to broader scientific literature Essential References Not Discussed: None Other Strengths And Weaknesses: # Strengths - The paper uses a setup that's simple enough for analysis and is able to show that accepted wisdom in grokking, i.e., L_2 norm of parameters may not suffice. Note that this reminds me of a string of papers that questions "deep learning may not be explained by norms". Happy to see these papers cited in the work as well # Weaknesses - The paper is very dense and some of the sections, if rewritten, would be accessible to a broader audience Other Comments Or Suggestions: Grokking is commonly thought of as delayed generalization. Typically papers include a "training" vs "validation" plot to show delayed generalization. Could the authors simplify Figure 1 and perhaps Figure 3 to relate these terms to the terms plotted in the figure? Could the authors simplify Figure 2 and Figure 4 as there are too many terms plotted in a single plot? Questions For Authors: Please see my comments on including proof sketches for main theorems as well as improving figures in the paper made above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their time and thoughtful feedback. We sincerely appreciate the effort put into evaluating our work and offering constructive suggestions. Below, we respond to each of the points raised. **Theoretical Claims** We've rewritten the proofs of the two main theorems (2.1 and 3.1) more simply, this time with a proof sketch, which we can summarize as follows. The idea is to first show that, given a matrix $\tilde{\mathbf{X}} \in \mathbb{R}^{N \times n}$ and a vector $\mathbf{b}^* \in \mathbb{R}^{n}$ of sparsity $s$ ($\|\mathbf{b}^*\|_0 \le s$), we can write, for any vector $\mathbf{b} \in \mathbb{R}^{n}$, $ \|\mathbf{b} - \mathbf{b}^*\|_2 \le C_1 \| \tilde{\mathbf{X}} \left(\mathbf{b} - \mathbf{b}^*\right)\|_2 + C_2 | \|\mathbf{b}\|_1 - \|\mathbf{b}^*\|_1|$ where $C_1$ and $C_2$ are constants that depend only on $s$ and $\tilde{\mathbf{X}}$ (notably on the restricted isometry constant of $\tilde{\mathbf{X}}$). With this decomposition in mind, we then show that : * there is a memorization phase, after which the term $\| \tilde{\mathbf{X}} \left(\mathbf{b}^{(t)} - \mathbf{b}^*\right)\|_2$ vanish (or becomes proportional to the noise if there is any), since we have $ \tilde{\mathbf{X}} \mathbf{b}^{(t)} \approx \mathbf{y}^* = \tilde{\mathbf{X}} \mathbf{b}^* + \xi$. * and a second phase, during which $\|\mathbf{b}^{(t)}\|_1$ converges to $\|\mathbf{b}^*\|_1$: we show that this phase takes a time inversely proportional to $\alpha$ (the learning step) and $\beta_1$ (the regularization strength). We arrive at the final result by combining these two steps. The same type of decomposition applies to matrix factorization, with a slight difficulty because not only singular values but also singular vectors are taken into account. **Weaknesses: Rewriting** The rewriting of the proofs allows us to considerably reduce the number of pages in the appendix and to move many results from the appendix to the main text. **Validation plot** In our case, for an iterate $\mathbf{b}$, the training error is $\|\tilde{\mathbf{X}} \mathbf{b} - \mathbf{y}^*\|_2^2$, and the test (or validation) error is $\|\mathbf{b} - \mathbf{b}^*\|_2^2$. Normally, the generalization error should be $\mathcal{E}(\mathbf{b}) = \mathbb{E}\left(\mathbf{x}^\top \mathbf{b} - \mathbf{y}^*(\mathbf{x})\right)^2>$ (expectation with respect to $\mathbf{x}$ and $\xi$), where $\mathbf{y}^*(\mathbf{x}) = \mathbf{x}^\top \mathbf{b}^* + \xi$. Assuming $\mathbb{E}\xi = 0$, and using $\Sigma = \mathbb{E}[\mathbf{x} \mathbf{x}^\top]$, we get $\mathcal{E}(\mathbf{b}) = \left(\mathbf{b} - \mathbf{b}^* \right)^\top \Sigma \left(\mathbf{b} - \mathbf{b}^* \right) + \mathbb{E}\xi^2$. As we illustrate in the appendix, random matrices, notably Gaussian and Bernoulli with independent entries, allow for the lowest restricted isometry constant and, thus, better recovery of sparse vectors. Under this iid assumption, we have $\Sigma = \sigma^2\mathbf{I}_n$ for a certain $\sigma > 0$, which implies $\mathcal{E}(\mathbf{b}) = \sigma^2 \|\mathbf{b} - \mathbf{b}^*\|_2^2 + \mathbb{E}\xi^2$. So $\|\mathbf{b} - \mathbf{b}^*\|_2^2$ captures well the notion of test (or validation) error commonly used in the context of grokking, while $\mathbb{E}\xi^2$ capture the irreducible part of that error (we exclude it in the figure for simplicity). **Could the authors simplify Figure 1 and perhaps Figure 3 to relate these terms to the terms plotted in the figure?** Thanks for the suggestion, we will simplify the figures as best we can. --- Rebuttal Comment 1.1: Comment: Thank you. If I understand your rebuttal, (1) the proof-sketch shows the two phase nature of error reduction for sparse recovery. (2) Validation plot -> shows the terms for train and test error. I agree with these definitions. My last point was to gently push the authors to make the paper accessible to more folks but my inputs are optional. In general, I'd like the authors to do what they believe is correct and treat my suggestions as being offered in the spirit of being helpful.
Summary: This paper studies grokking phenomenons in the setting where the model has a certain special property $P$, and reveals that by using GD with a small but non-zero regularization of $P$ it is possible to observe grokking. In addition, it shows that modifying model depth or performing data selection can also amplify grokking. ### Update after rebuttal My original concern still remains while the authors did not provide further reply. Hence I keep my score unchanged. Claims And Evidence: Claims are supported by both theoretical and empirical evidence. Methods And Evaluation Criteria: Not applicable, as this paper does not propose new methods. Theoretical Claims: I checked the proofs for Section 2 roughly, which appear to be correct. Experimental Designs Or Analyses: The experimental designs are sound. Supplementary Material: This paper does not have supplementary material. Relation To Broader Scientific Literature: The contributions are an addition to the theoretical understanding of the grokking phenomenon. Essential References Not Discussed: The discussion of the essential references are sufficient. Other Strengths And Weaknesses: ### Strengths 1. The notations and introduction to the related work (e.g., those about the backgrounds of the problem setting) are satisfying. 2. Overall the writing is clear. And the proofs are clear and complete. 3. The findings for the grokking phenomenons in these new settings are also interesting. ### Weaknesses 1. This paper gives me a very mixed feeling: I can see that the authors have made significant efforts to try to derive a general result to understand the grokking phenomenon for a certain kind of problem (GD with small regularization), while the authors decided to have two separate but very repetitive sections: the results for Theorem 2.1 and Theorem 3.1 are in fact very similar even though they are studying two different problems. This strongly suggests that an underlying mechanism is behind somewhere: it seems that the grokking phenomenons in these two settings are both caused by the different generalization properties (implicit bias) of the early solution and that of the late solution. This is basically the idea proposed by Lyu et al., 2023, although Lyu et al., 2023 focused on classification problem. 2. In my view, the formulations of the regularization are nothing but the types of tendency towards specific late solution, especially when you are using small regularization coefficient, thus can be covered by the core idea of Lyu et al., 2023. In this way, I think the authors should sharpen their message from a higher point of view, clearly summarize their insights compared to those in Lyu et al., 2023, discuss their additional insights at the start of the main result, and reduce the repetitive discussion for the sparse recovery problem and matrix factorization problem. Reference Lyu et al. Dichotomy of early and late phase implicit biases can provably induce grokking. Other Comments Or Suggestions: Please see Weaknesses. Questions For Authors: What are the difference between the core idea in this paper and that of Lyu et al., 2023? By saying core idea, I mean the grokking phenomenon is formed by the transition from the early solution with poor generalization to the late solution solution that generalizes better. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback and the time spent reviewing our work. Our responses to the comments are provided below. **Weaknesses** 1) We have rewritten both sections to emphasize the fundamental difference between the two main theorems (2.1 about sparse recovery and 3.1 about matrix factorization) with proof sketches. We first show (using standard results from compressed sensing literature) that, given a matrix $\tilde{\mathbf{X}} \in \mathbb{R}^{N \times n}$ and a vector $\mathbf{b}^* \in \mathbb{R}^{n}$ of sparsity $s$ ($\|\mathbf{b}^*\|_0 \le s$), we can write, for any vector $\mathbf{b} \in \mathbb{R}^{n}$, $\|\mathbf{b} - \mathbf{b}^*\|_2 \le C_1 \| \tilde{\mathbf{X}} \left(\mathbf{b} - \mathbf{b}^*\right)\|_2 + C_2 | \|\mathbf{b}\|_1 - \|\mathbf{b}^*\|_1|$ where $C_1$ and $C_2$ are constants that depend only on $s$ and $\tilde{\mathbf{X}}$. After a short amortization phase, the term $\| \tilde{\mathbf{X}} \left(\mathbf{b}^{(t)} - \mathbf{b}^*\right)\|_2$ vanish (or becomes proportional to the noise if there is any), since we have $ \tilde{\mathbf{X}} \mathbf{b}^{(t)} \approx \mathbf{y}^* = \tilde{\mathbf{X}} \mathbf{b}^* + \xi$. Then comes a second phase, during which $\|\mathbf{b}^{(t)}\|_1$ converges to $\|\mathbf{b}^*\|_1$: our contribution is to show that this phase takes a time inversely proportional to $\alpha$ (the learning step) and $\beta_1$ (the regularization strength). The same type of decomposition applies to matrix factorization, with a slight difficulty because not only singular values but also singular vectors are taken into account. In fact, although the two results (sparse recovery and matrix factorization) are similar, the distinction lies at several levels: * in sparse recovery problems, we take into account only the iterate $\mathbf{b}^{(t)}$ (especially its $\ell_1$ norm), whereas, in matrix factorization, we take into account not only the singular value matrix $\Sigma^{(t)}$ (especially its $\ell_1$ norm) of the iterate $\mathbf{A}^{(t)} = \mathbf{U}^{(t)} \Sigma^{(t)} \mathbf{V}^{(t)\top}$, but also its singular vectors $\mathbf{U}^{(t)}$ (left) and $\mathbf{V}^{(t)}$ (right), which makes proofs in the second case slightly more difficult. * if we take a matrix factorization problem and just optimize it with $\ell_1$, there's no grokking unless the matrix is extremely sparse so that the notion of sparsity prevails over the notion of rank, which shows the point of studying $\ell_*$ (nuclear norm regularization) separately. * models (linear or not) trained with $\ell_2$, $\ell_1$ and $\ell_*$ have very different properties: the aim of our work is to show that in all these cases, if we want to have a model that generalizes, and which has the desired property (weights of low $\ell_2$ norm, sparse, or of low rank), then it's better to use a property-specific, weak regularization, and to train the model long enough. This extends Luy et al.'s work, which focuses solely on $\ell_2$. Indeed, there will be no grokking by using only $\ell_2$ in the problems we study theoretically in the paper. This also shows that the mechanism behind grokking goes beyond the simple $\ell_2$ norm (we show that with other regularization, we can change the delay between memorization and generalization). 2) Our results cannot be obtained directly from Luy et al.'s work. They show that large-scale initialization and non-zero weight decay lead to grokking provided the model is homogenous. However, as we show, using only $\ell_2$ (with large-scale initialization or not) in the problems we study theoretically in the paper, there will be no grokking. With large-scale initialization and $\ell_2$ only, there is an abrupt transition in the generalization error during training, driven by changes in the $\ell_2$-norm of the model parameters. This transition, however, does not result in convergence to an optimal solution. We call this phenomenon "grokking without understanding," and we attribute it to the fact that the assumptions underlying the theoretical predictions of Luy et al. are violated in our setting. **Questions: Difference between the core idea of the paper and that of Lyu et al. (2023)** Luy et al.'s focus on $\ell_2$ and large initialization. But in our case, there will be no grokking by using only $\ell_2$ in the problems we study theoretically. So, we extend their work to other regularizers, and there is no need for large-scale initialization in our setting. In addition, we show how grokking can be necessary in specific contexts. Suppose we want a model with low-rank weights and opt to achieve that through $\ell_*$ regularization. In that case, our work suggests using the lowest possible regularization coefficient (the one the computational resources allow) and training the model as long as possible, far beyond the overfitting point (grokking). We also show that grokking can be extremely amplified/reduced by selecting the data appropriately. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. First of all, I would like to point out that the cited work should be "Lyu et al.", rather than "Luy et al.". Based on the rebuttal, I think that the authors did not fully understand my main question, and I must make it clear that I did not claim that your results can be obtained directly from Lyu et al, 2022. My point lies in that whether the grokking phenomenon manifests as a dichotomy of early implicit bias and later implicit bias of the learning dynamics, as shown by Lyu et al., no matter whether the type of the implicit bias is $\ell_2$ or not. If this is the case, then the authors only need to first present this general result and then discuss the application of this general for sparse recovery and matrix factorization by revealing their early and implicit bias, rather than presenting two separate sections as if there is no inherent connection between these two settings.
Summary: The paper mainly focuses on the theoretical approach to debunk the necessity of L2 norm for exhibiting grokking phenomenon. As an alternative, the authors suggest that sparsity of the solution space can be alternative condition to have grokking phenomenon in deeper layer (proven practically). The paper includes rich theoretical justification on a linear matrix fitting problem and demonstrates with 2-layer to 3-layer MLP with ReLU activation, PINN, and LSTM to justify their claim. Claims And Evidence: The paper has laid some theoretical works to disentangle the necessity of L2-type regularization and the phenomenon of grokking. Moreover, this leads to that L2 weight norm may not be a good indicator for grokking progression measure. I find potential significant issues in the manuscript: - Although suggesting that the “solution sparsity” behaves as alternative potential source of grokking is interesting and meaningful, there should be more detailed definition with rich examples what solution sparsity is. For example, is the modular arithmetic problem to recover the removed half of the binary relationships “sparse” (this is the first problem that grokking is reported on)? - There is a clear logic gap between the theoretical justification and the actual claim. I agree that the amount of theoretical works of the authors are respectable; the theory seems to be rich enough, but for its own. The theory deals with the problem with deep linear layers which is not commonly used in practice nor it is dealt in the typical literature on grokking. The way the claim that made on the linear layer connects to nonlinear experiment seems a large logical gap to me. And to address this, I would like to suggest a theoretical work on nonlinear layers. - It is already known that other factors than L2 norm can reduce grokking. For example, papers like Grokfast (arXiv 2024) have shown that enhancing a momentum-like term of the gradient descent orthogonally reduce the grokking delay. Therefore, the “connection between L2 norm and grokking” may not be as significant as the authors has initially claimed. Unless these issues are addressed in the rebuttal, I am afraid I cannot agree with the significance of the author’s claim. Methods And Evaluation Criteria: The paper tries to justify its claim theoretically especially on matrix factorization and least squares problem, and then tries to relate its conclusions on the deeper layer neural networks. Although the grokking analysis on “deep linear problems” is interesting enough, I do not agree that the conclusion for this problem trivially relates to the nonlinear network case. I have checked multiple times throughout the theoretical section, but I have not found a theory that focuses on at least two-layer nonlinear networks. There is a missing gap between the theory and the evaluation and I believe that these two phenomenon can still be regarded as separate phenomena unless a proper discussion encloses the gap. Theoretical Claims: I have tried my best to check the correctness of the proof, but I must say I have missed many theorems especially those appear in the supplementary material. Although I agree that the presented mathematical justification indeed supports the author’s claim that Euclidean norm is not necessary for grokking, I do not believe this can be trivially extended to more complicated architecture, e.g., Transformers, where the grokking phenomenon is first observed. Moreover, I am still not convinced that the theory on linear problem is extendable to nonlinear experiments. Experimental Designs Or Analyses: If the Transformer-based experiments (the one like in the first report on Grokking) are presented along with the other experiments, it would be more nicely looking. This is not a cumbersome experiment. Supplementary Material: Yes, I have read throughout the supplementary materials and checked every theoretical claims of the authors. Relation To Broader Scientific Literature: Deep linear layers are in fact has a domain specific usage, e.g., in kernel estimation problem (e.g., in KernelGAN) and other problems that require redundancy in parameter space for smoother training. However, it is not well known that grokking is significant in these model domains. There this paper might get its significance. Essential References Not Discussed: - The discussion between grokking and double descent has been quite significant, and the authors might get better insight in the joint works, e.g., "Unifying Grokking and Double Descent" - Groktransfer (ICLR 2025) and Grokfast (arXiv) both addresses different acceleration strategies other than exploiting sparsity and l2 norm. - Papers like “Grokked Transformers are Implicit Reasoners” discusses large scale experiments, too. I believe that grokking effect can have better meanings in this large scale, highly nonlinear experiments. And the concept of solution sparsity should be discussed in this large domains, on how it should be “extended”. Other Strengths And Weaknesses: Please refer to other sections of this review. I respect the author’s effort and work for providing theoretical justification in this work. However, I am not still convinced on the logical flow of this manuscript. Other Comments Or Suggestions: - In line 432, LSMT→LSTM. Questions For Authors: - Since grokking phenomenon is firstly observed in Transformer network and imposes significance the most in these types of network, why the authors only have chosen to demonstrate for MLP and LSTM networks? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and insightful comments. Please find our responses to the points below. **Claims And Evidence** * We worked in a setting where the notion of sparsity (resp. low rank) is well defined, $\mathbb{R}^n$ (resp. $\mathbb{R}^{n_1 \times n_2}$). This is the number of non-zero elements of a vector (resp. the number of non-zero singular values of a matrix). If we decide to work, say, in $\mathbb{Z}/p\mathbb{Z}$ for some prime number $p$ (like in the first problem grokking was first reported on, modular arithmetic), then these definitions are not trivially valid, and the theory we developed no longer applies (indeed, the theory of sparse recovery and matrix factorization over finite fields is very complex and uses completely different tools from those used in $\mathbb{C}$). This is why we can not answer the question, “Is the modular arithmetic problem to recover the removed half of the binary relationships sparse?” using our current theory and leave it as a future direction (see section F of the Appendix). Although for tasks such as the sparse parity task (on which grokking is observed [1, 2]) one can construct sparse MLP that fits the data [1], we are not aware of any work that has done this for modular arithmetic. [3] constructs a two-layer MLP with square activations that does modular addition, but the model weights constructed are dense. * The most crucial point of our work is to show that not only does the grokking mechanism go beyond the $\ell_2$ norm, but also why it is necessary to do so. Indeed, we show that the time to generalize is inversely proportional to the regularization strength used and that the generalization error after convergence decreases with it. Moreover, we show that grokking can also be highly accelerated by simply choosing appropriate data samples. To the best of our knowledge, no previous work on grokking has done all these. Although our theory relies on simple frameworks that allow us to extract the laws justifying our results, we also show this empirically for different types of models and datasets, including the basic setup on which grokking was first observed, the algorithmic dataset. We acknowledge that developing a theory in a simplified framework and then testing it experimentally in a more complicated setup has its flaws. The point of our paper is not to say that the mechanisms behind linear models extend to non-linear models but to illustrate that grokking is nevertheless observed if we use the insights obtained theoretically on linear models. We'll try to make this point clearer in the final version of the paper. * We acknowledge that grokking can be caused by a number of factors. This is the point of our work, even if we focus on regularization. Other work (Grokfast, etc) focuses on other factors to amplify/reduce the Grokking delay, and we will discuss this in detail in the related works section. **Methods And Evaluation Criteria, Theoretical Claims** The point of our work is not that our theoretical results on linear models **trivially** extend to non-linear models. We simply show that the mechanism behind grokking (and regularization) goes beyond the $\ell_2$ norm: we show it theoretically and empirically in specific settings and just empirically in others (and leave the theory in such settings as future work). **Essential References Not Discussed** Thanks for the references, we will discuss this in the related work section. These works differ from ours in that the first focuses on double-descent (which is not the point of our work) and the other on other strategies to accelerate grokking (we focus on different types of regularization and data selection). **Questions For Authors** We excluded Transformer results on modular arithmetic as the paper was already too long. On Transformer, $\ell_1$ and $\ell_*$ still affect the delay between memorization and generalization. **References** [1] Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit (https://arxiv.org/abs/2207.08799) [2] A Tale of Two Circuits: Grokking as Competition of Sparse and Dense Subnetworks (https://arxiv.org/abs/2303.11873) [3] Grokking modular arithmetic (https://arxiv.org/abs/2301.02679) --- Rebuttal Comment 1.1: Comment: I appreciate the responses. My original concern was on the missing gap between the author's claim on grokking in a general sense and the theoretical justification done for linear layers. To my understanding from this rebuttal, the theoretical part and the empirical part of this work still has unresolved logical gap, and one conclusion does not necessarily connects to the other conclusion, as the authors have acknowledged as: >This is why we can not answer the question, “Is the modular arithmetic problem to recover the removed half of the binary relationships sparse?” using our current theory and leave it as a future direction. Even if we disregard this gap, the theory established on linear models carry little significance in general applications of grokking phenomenon on deeper, more complex models. Therefore, I am sorry I cannot give higher score based on this rebuttal. However, I believe the author's claim of disentangling the necessity of $\ell_2$ norm and the grokking phenomenon is indeed interesting, and I would like to recommend the authors to further develop on this work.
Summary: The authors study delayed generalization (grokking) in sparse/low rank recovery tasks. The authors focus on the transition from an overfitting solution to a generalizing solution, both in linear sparse recovery and in low rank matrix factorization. For the linear cases, the authors derive the scaling of the grokking time difference. They show that for sparse tasks, grokking can be tracked and controlled by the $L_1$ or the trace norm of the weights or dictionary vectors. Claims And Evidence: The claims are both proven and supported by empirical evidence. Methods And Evaluation Criteria: The problem is well set up and the evaluation criteria make sense fo this problem. Theoretical Claims: I only checked the main theorem 2.1, while the rest are somewhat demonstrated directly by the experiments. I would state that the paper contains 81 pages, and it would be unreasonable to check in its entirety in the allotted time for review. Experimental Designs Or Analyses: The main experiments are simple and done on convex problems, and are therefore reproducible with the parameters given in the paper. Supplementary Material: I reviewed only a small fraction of the supplementary material related to the formal proofs, most of the results themselves are not new, but relating them to grokking is. I also skimmed some of the different experiments provided which seem to be consistent with the statements in the text. Relation To Broader Scientific Literature: The contributions presented in this work are related to the broader scientific literature, in particular they relate to grokking in linear estimators, where delayed generalization can occur even in convex problems depending on certain parameters of the setup. They extend these ideas to more complicated settings where a clearly defined overfitting and generalizing solutions exist. Essential References Not Discussed: No essential references clearly come to mind for the specific context of this paper, certainly not all grokking papers are cited but since the focus is not on feature learning but just on the transition between possible solutions. Perhaps the authors could contrast their work more with the feature learning works such as [1,2]. Refs: [1] - GROKKING AS A FIRST ORDER PHASE TRANSITION IN TWO LAYER NETWORKS, Rubin et al., ICLR 2024 [2] - GROKKING AS THE TRANSITION FROM LAZY TO RICH TRAINING DYNAMICS, Kumar et al., ICLR 2024 Other Strengths And Weaknesses: **Strengths:** 1) The problem setup and topic are very interesting in my opinion, and are of interest to the DL community. I have not seen works that focus on the relation between grokking and sparsity in this way, and I believe it can be a key contributor to this phenomenon in many real world settings. 2) The results seem robust and correct, and provide a new way to look at progress measures of grokking, possibly in problems where the problem is not tractable analytically. **Weaknesses:** The main and overwhelming weakness of this paper is its extreme breadth and length. The authors attempt to cover too many setups and end up compromising quality for quantity. By attempting to tackle sparse recovery, deep sparse recovery, data selection, low rank recovery, matrix completion, and extending to nonlinear cases, the main text cannot provide more than a shallow discussion for many of these cases, which leads to the following problems: - The paper is poorly written, both in terms of the number of typos/grammatically incorrect sentences. - When theorems are given, there are no sketches of the proofs, which should provide the intuition and deep insight as to why these results work and make sense, but instead just given in terms of full proofs in the appendices, which are $O(70)$ pages. This is not the correct format for a conference paper. - Many results and figures do not appear in the main text, but are referred to the appendices. This would be fine if not for comments such as *"While grokking has been studied extensively, the impact of data selection on grokking remains largely unexplored, making this one of the first works to address this critical aspect."* Either a topic is important enough to appear fully in the main text and discussed seriously, or it is not truly critical, it cannot be both. Results in appendices should be either trivial extensions that do not add any deep insights, or indeed proofs, when the sketch is given in the main text. Another example is the Deep Sparse Recover paragraph, which directly points to the appendix, as well as the Realistic Signals paragraph. All of these should either be removed from this work or discussed seriously in the main text. - There is no mention of the second effect that typically arises in grokking which is the non-monotonicity of the test loss while the training loss monotonically decreases. This is a minor point but is never discussed. This weakness is the reason for my inclination to reject this paper at the current version. Other Comments Or Suggestions: **Partial list of typos and grammatical errors:** - The paper has no page numbers at the bottom, not sure how this happened. - L20 (right) - "we always need an ℓ2 regularization term." - L23 (right) - "the dynamics of" - L37 (right) - "previous art" - L62 - “with an mlp” - L125 - "number of samples needs" should be "needed" - L163 - "since for problem of interest", should be "for the..." - L171 - "If β2 is choose such.." - L216 - "number of measures N...", should be "samples/measurements" - $\tau$ is not defined before it is used in the text, and it is unclear if the error term $\xi$ is always meant to be a constant vector or a random vector which is different per sample, etc. There are more which I have not included here. **Suggestions:** My main suggestion given the weaknesses and strengths of the paper is simple: divide this paper into at least 2 different works, one which simply focuses on sparse recovery/deep sparse recovery, and explains the related results in depth in the main text, including sketches of proofs, intuition, and dependence on the relevant parameters. Then a separate work that focuses on low rank estimation, or putting the low rank estimation in the appendix of the first work, since it doesn't seem to provide a much deeper insight except for replacement of the lasso norm with the nuclear norm. Then I would suggest a concrete rewriting of the paper to ensure that the grammatical mistakes do not repeat and improving the structure. I would also suggest that the authors consider describing the problem in terms of its invariant quantities that actually dictate the scaling, for instance $\alpha \beta_1$ is one of the main scaling quantities, not $\alpha,\beta_1$ separately. It would be smart to simply define products and ratios of the original parameters and study the problem in terms of these quantities, to avoid cumbersome notations and clearer takeaway messages. Questions For Authors: **Questions:** 1) What is the overlap between $b(t_1)$ and $b(t_2)$? What changes more, the norm or the direction? 2) Can you comment on the fact that the test loss is always monotonic in your settings? in general it is possible to have a non monotonic test loss even for a convex problem if the transition from overfitting to generalizing solution involves a big change in the parameters (either alignment or norm), so I'm curious if you can explain why it doesn't happen in your case. In some figures (fig 5 for example) you do have it, but only when it is accompanied by a previous non-monotonic effect in the training loss, which is non generic. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their time and insightful comments. We appreciate the effort taken to evaluate our work and provide constructive feedback. Below, we address each of the points raised. **Weaknesses** We understand that the paper is poorly organized. That is why, after submission, we completely rewrote it as follows: * All the proofs have been simplified, and we've included the proof sketch for each main theorem. For example, here's the main idea behind Theorem 2.1. We first show (using standard results from compressed sensing literature) that, given a matrix $\tilde{\mathbf{X}} \in \mathbb{R}^{N \times n}$ and a vector $\mathbf{b}^* \in \mathbb{R}^{n}$ of sparsity $s$ ($\|\mathbf{b}^*\|_0 \le s$), we can write, for any vector $\mathbf{b} \in \mathbb{R}^{n}$, $\|\mathbf{b} - \mathbf{b}^*\|_2 \le C_1 \| \tilde{\mathbf{X}} \left(\mathbf{b} - \mathbf{b}^*\right)\|_2 + C_2 | \|\mathbf{b}\|_1 - \|\mathbf{b}^*\|_1|$ where $C_1$ and $C_2$ are constants that depend only on $s$ and $\tilde{\mathbf{X}}$. After a short amortization phase ($t \le t_1$), the term $\| \tilde{\mathbf{X}} \left(\mathbf{b}^{(t)} - \mathbf{b}^*\right)\|_2$ vanish (or becomes proportional to the noise if there is any), since we have $ \tilde{\mathbf{X}} \mathbf{b}^{(t)} \approx \mathbf{y}^* = \tilde{\mathbf{X}} \mathbf{b}^* + \xi$. Then comes a second phase, during which $\|\mathbf{b}^{(t)}\|_1$ converges to $\|\mathbf{b}^*\|_1$ at $t_2$: our contribution is to show that $t_2 - t_1$ is inversely proportional to $\alpha$ (the learning step) and $\beta_1$ (the regularization strength). * We made the body of the paper self-contained insofar as any contribution mentioned in the abstract and introduction is proven in the body of the paper theoretically and/or empirically (grokking time, effect of data selection on grokking, limits of $\ell_2$, and grokking without understanding, ...). So, the results in the appendix this time are just extensions of those in the main text. * With the rewrite, the appendix has also been reduced from $\mathcal{O}(70)$ to less than $50$ pages. * We have left the part on matrix factorization in the main text since the mechanics behind the generalization phase, in this case, differs from sparse recovery. In sparse recovery problems, we take into account only the iterate $\mathbf{b}^{(t)}$ (especially its $\ell_1$ norm), whereas, in matrix factorization, we take into account not only the singular value matrix $\Sigma^{(t)}$ (especially its $\ell_1$ norm) of the iterate $\mathbf{A}^{(t)}$, but also its singular vectors $\mathbf{U}^{(t)}$ (left) and $\mathbf{V}^{(t)}$ (right), which makes the dynamic in the second case completely different from the first case. Also, in a matrix factorization problem, replacing $\ell_*$ (the nuclear norm regularization) by $\ell_{*/2}$ does not induce grokking (at least in the shallow case used in the theory). Thanks for the suggestion about figures and parameters of interest. We will define a parameter $\tilde{\alpha} = \alpha \beta$, which is the main parameter in the theory ($\alpha$ is the learning rate and $\beta$ is the regularization). **What is the overlap between $\mathbf{b}^{(t_1)}$ and $\mathbf{b}^{(t_2)}$? What changes more, the norm or the direction?** Once memorization occurs ($\mathbf{X}\mathbf{b}^{(t_1)}=\mathbf{X}\mathbf{b}^*+ \xi$), the direction of $\mathbf{b}^{(t)}$ that explains the data does not need to change drastically. The main change from $t_1$ to $t_2$ is the magnitude of $\mathbf{b}^{(t)}$ in $\ell_1$—i.e., it transitions from something closer to the least‐square solution’s norm $\|\hat{\mathbf{b}}\|_1$ down to the teacher’s norm $\|\mathbf{b}^*\|_1$ (while still maintaining some form of memorization). So, $\mathbf{b}^{(t_1)}$ and $\mathbf{b}^{(t_2)}$ are closely aligned in direction; the main difference is that the $\ell_1$ norm shrinks from $\|\hat{\mathbf{b}}\|_1$ to $\|\mathbf{b}^*\|_1$. Hence, the norm changes more than the direction. **Can you comment on the fact that the test loss is always monotonic in your settings?** The monotonic decrease in test loss is due to how we choose the hyperparameters, mainly the learning rate and the regularization strengths. In the case of sparse recovery, for example, the test loss is controlled mainly by $\|\textbf{b}^{(t)}\|_1$, even more so after memorization, which takes only a few steps. After this memorization, $\|\textbf{b}^{(t)}\|_1$ converge very slowly towards $\|\textbf{b}^*\|_1$. When $\alpha$ and/or $\beta_1$ are larger than the values used in the paper, there are indeed a lot of oscillations in test loss. The same reasoning applies to experiments outside the linear scope (e.g., figure 5). We chose the learning rate and regularization strength to illustrate grokking. Larger values show oscillations (e.g., the Slingshot Mechanism [1]). [1] The Slingshot Mechanism: An Empirical Study of Adaptive Optimizers and the Grokking Phenomenon (https://arxiv.org/abs/2206.04817) --- Rebuttal Comment 1.1: Comment: I thank the authors, and appreciate their willingness to accept the criticism. I believe that the paper definitely has merit and the results are sound. However, due to the format of ICML, it is impossible for me to review a revised version, and since the revised version includes a full rewriting, I cannot raise my score in good faith. I hope the authors understand, and should the paper not be accepted in this iteration I highly recommend they resubmit the fully revised version to another venue.
null
null
null
null
null
null
When Maximum Entropy Misleads Policy Optimization
Accept (poster)
Summary: The paper examines failure cases of Maximum Entropy (MaxEnt) RL, showing how entropy maximization can mislead policy optimization. It introduces the Entropy Bifurcation Extension, a theoretical construct proving that MaxEnt RL can drive policies toward suboptimal actions (Theorem 5.5, Proposition 5.6). Empirical results demonstrate SAC failures in vehicle control, quadruped locomotion, and drone tracking due to misleading entropy landscapes. The authors explore adaptive entropy tuning as a mitigation strategy but acknowledge its limitations. ## update after rebuttal I appreciate the authors’ detailed response and additional experimental results. The clarification about the theoretical setting and the empirical motivation for SAC-AdaEnt helped address my main concerns. The new comparisons with Soft Q-Learning and discussions of performance across more environments (e.g., Hopper, Vehicle, Quadrotor) improved the empirical grounding of the claims. I still believe that broader algorithmic comparisons and more extensive ablations (e.g., on entropy scaling) would further strengthen the paper, but I find the core contributions valuable. I maintain my score, and I look forward to seeing the suggested revisions incorporated into the final version. Claims And Evidence: 1. The claim that entropy can mislead policy optimization is well-supported by both theoretical analysis and empirical results. 2. Theoretical constructs such as the Entropy Bifurcation Extension and its proofs are rigorously presented and demonstrate the misalignment between MaxEnt RL and true optimal policies. 3. The empirical experiments clearly show cases where SAC fails due to entropy distortion. Methods And Evaluation Criteria: 1. The evaluation includes diverse continuous control tasks, which are relevant benchmarks for MaxEnt RL. 2. The paper does not propose a new RL algorithm but instead provides an analysis of existing methods, which may limit its impact as a technical contribution. Theoretical Claims: 1. The theoretical analysis correctly identifies failure modes of MaxEnt RL and proves that entropy can mislead policies into suboptimal behaviors. 2. The Entropy Bifurcation Extension (Theorem 5.5, Proposition 5.6) is a novel and valid mathematical construct that highlights the downsides of entropy maximization. 3. The theory does not offer a general solution to these failures but instead highlights when and why they occur. Experimental Designs Or Analyses: Advantages 1. The experiments effectively demonstrate failure cases of MaxEnt RL in vehicle control, quadruped locomotion, and drone tracking tasks. 2. The empirical results support the theory, showing how SAC fails due to misleading entropy landscapes. Limitations 1. The study focuses on SAC but does not evaluate whether other entropy-based methods (e.g., MPO, Soft Q-Learning) experience similar failures. 2. Could benefit from conducting ablation studies on entropy-related hyperparameters, such as temperature scaling, to analyze their role in failure cases. Supplementary Material: The supplementary material provides extended proofs and additional technical details, which improve the clarity of the theoretical contributions. Besides, no additional experimental results are provided beyond what is already in the main paper. Relation To Broader Scientific Literature: The paper contributes to the understanding of when entropy regularization fails in RL, contrasting with prior work that highlights its benefits (e.g., Ahmed et al.,2019)​. By providing a complementary perspective, it challenges the assumption that entropy always improves policy optimization and explores conditions where it can instead mislead learning. Essential References Not Discussed: The paper does not cite Ahmed et al., "Understanding the Impact of Entropy on Policy Optimization" (ICML 2019), which analyzes how entropy smooths optimization landscapes and improves convergence. Since this submission focuses on failure cases of entropy regularization, discussing Ahmed et al. would provide a more balanced perspective on when entropy is beneficial versus when it may mislead policy optimization. Other Strengths And Weaknesses: Strengths: 1. The paper provides new theoretical insights into the failure cases of MaxEnt RL. 2. The experiments are well-designed and effectively demonstrate the theoretical claims. 3. The Entropy Bifurcation Extension is a novel construct for analyzing RL policies. Minor Issue: 1. The paper does not propose a new method, making its contribution primarily analytical rather than algorithmic. Other Comments Or Suggestions: 1. Clarifying how adaptive entropy tuning compares to alternative exploration techniques (e.g., intrinsic motivation, Bayesian exploration) would add value. 2. A more detailed comparison between SAC and other entropy-based RL methods (e.g., MPO, Soft Q-Learning) would provide a broader picture. Questions For Authors: 1. How does the failure mode extend to other entropy-based RL methods? 2. How sensitive is the failure mode to the choice of reward scaling and environment structure? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive feedback and questions. Indeed, we focused on understanding the principles behind potential limitations of MaxEnt (SAC being its best performing and most widely-used version), complementing most existing results on the benefits of entropy regularization. Although we phrase the effect as "misleading", we showed that it can be either beneficial or harmful in different environments (Section 6.2 discusses its benefits). Consequently, the formulation of the algorithm SAC-AdaEnt is intended more as an ablation study mechanism to empirically test the validity of our theoretical analysis in practice, rather than as a new candidate of MaxEnt algorithms. At the same time, we definitely agree with the reviewer that evaluating the algorithm on more baselines and hyperparameter changes can strengthen the empirical understanding and contribution of the paper. Here we respond to the questions directly, and will extend our paper along these lines accordingly. **Q: (More MaxEnt baselines)** > How does the failure mode extend to other entropy-based RL methods? A more detailed comparison between SAC and other entropy-based RL methods (e.g., MPO, Soft Q-Learning) would provide a broader picture. **A:** We will follow your suggestion and expand the experiment section with more baseline MaxEnt algorithms. Here we provide results from additional experiments comparing with Soft Q-learning (SQL) and with adaptive entropy (SQL-AdaEnt). We observe that the overall performance of SQLs generally underperforms compared to SAC, and SQL is affected by the misleading effect in the Vehicle and Quadrotor environments as well, in which case SQL-AdaEnt also improves the performance. In the table below, we summarize the mean and variance of the policy return in the environments of Vehicle, Quadrotor (as examples of negative misleading effect) and Hopper (positive misleading effect). We see that SAC generally performs better than SQL across all environments, and the effect of AdaEnt tends to be similar on SQL and SAC. That is, in Vehicle and Quadrotor, where SQL and SAC struggled, the AdaEnt component led to better performance. The improved performance of SQL-AdaEnt on Hopper may be due to SQL being a relatively weak baseline. | **Algorithm**| **Vehicle** | **Quadrotor** | **Hopper** | |-|-|-|-| | SQL| -2715.48 ± 453.00 | -6082.51 ± 1632.35 | 2998.21 ± 158.19 | | SQL-AdaEnt | -2077.59 ± 266.84 | -4499.35 ± 863.73 | 3115.94 ± 25.19 | | SAC ($\alpha=0.2$) | -2003.85 ± 867.82 | -475.29 ± 244.96 | 3484.46 ± 323.87 | | SAC auto-$\alpha$ | -1551.96 ± 636.88 | -666.62 ± 233.19 | 2572.00 ± 901.35 | | SAC-AdaEnt| -1250.45 ± 725.40 | -247.58 ± 45.15 | 3285.17 ± 958.43 | The results align with the predictions of our theoretical analysis, which showed that the misleading effect arises from the maximum entropy objective itself. As long as an algorithm aims to optimize policies to reduce divergence from the Boltzmann policy which is optimal under the MaxEnt objective, its performance can be negatively affected in situations that we analyzed in the paper. **Q: (More comparisons and ablation study)** > Clarifying how adaptive entropy tuning compares to alternative exploration techniques (e.g., intrinsic motivation, Bayesian exploration) would add value. How sensitive is the failure mode to the choice of reward scaling and environment structure? Could benefit from conducting ablation studies on entropy-related hyperparameters, such as temperature scaling, to analyze their role in failure cases. **A:** These are very important suggestions that we will follow to strengthen the empirical analysis in the paper. In general, our experience is that policy optimization methods that encourage more exploration generally underperform in control environments where low-entropy policies are the key to success, compared to conservative and more greedy approaches such as PPO. On the other hand, similar to our discussion in Section 6.2, in many environments the "misleading" effect of MaxEnt and enhanced exploration strategies can be crucial to performance. Based on the theoretical analysis framework that we proposed in this paper, we believe it is promising to develop fine-grained analysis methods that take into account of both the specific structure of an environment to inform the hyperparameter choices and exploration strategies. **Q: (Related Work)** > The paper does not cite Ahmed et al., "Understanding the Impact of Entropy on Policy Optimization" (ICML 2019) **A:** The paper was cited in the Related Work section on Line 90 as (Ahmed et al. 2019). We will add more discussion in the paper for this and similar analysis on the benefits of entropy regularization. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. Your new empirical results and willingness to extend the paper accordingly are helpful. Also, I apologize for my earlier oversight regarding the citation of Ahmed et al. (2019). I acknowledge that the paper is indeed cited in your Related Work section, and I appreciate your clarification. I look forward to seeing these improvements reflected in the final version of the paper.
Summary: The paper examines the trade-off in MaxEnt RL: while the entropy term fosters exploration and robustness, it can mislead policy optimization in tasks requiring precise, low-entropy actions, often causing MaxEnt methods like SAC to converge to suboptimal policies. The authors formalize this phenomenon with a toy example and introduce the Entropy Bifurcation Extension, showing that auxiliary states can create misleading soft Q-value landscapes that divert the MaxEnt-optimal policy from the true optimal one. To address this, the paper proposes SAC-AdaEnt, an adaptive tuning mechanism that monitors discrepancies between soft and plain Q-values, dynamically shifting policy updates to rely more on plain Q-values when needed. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, Theoretical Claims: Yes. Experimental Designs Or Analyses: I examined the experimental designs and analyses in the paper, particularly those illustrated in Figures 4–11, which involve continuous control tasks such as Vehicle control, Quadrotor control, and standard benchmarks like Acrobot, Obstacle2D, and Hopper. The paper compares SAC (with automated entropy adjustment) against PPO and the proposed SAC-AdaEnt. This setup is generally appropriate to highlight the effect of entropy on policy optimization. However, the choice of baselines is somewhat narrow. Including additional variants or state-of-the-art Max-Ent algorithms could have provided a more comprehensive comparison. Supplementary Material: Part A and Part D. Relation To Broader Scientific Literature: The paper builds on and extends several streams in the RL literature - MaxEnt RL, PPO, exploration versus exploitation. Essential References Not Discussed: No. Other Strengths And Weaknesses: While the adaptive mechanism (SAC-AdaEnt) is promising, the paper itself notes that measuring discrepancies between soft and plain Q-value landscapes requires a “global” understanding at each state. In high-dimensional environments, this might become computationally expensive or infeasible. Other Comments Or Suggestions: See Questions. Questions For Authors: 1. The paper does not specifically emphasize that PPO also uses an entropy term. While PPO does incorporate an entropy bonus to encourage exploration, this is treated as a secondary regularizer rather than a core objective. In contrast, the MaxEnt framework—exemplified by SAC—integrates entropy maximization directly into the learning objective, which can lead to the misleading effects discussed in the paper. This distinction is critical, as it highlights why PPO may avoid some of the pitfalls of overemphasizing entropy, thereby achieving more precise control in performance-critical tasks. 2. In Figure 11, the SAC-auto-alpha refers to the popular automated entropy adjustment trick or the AdaEnt mechanism proposed in this paper? Could you compare the SAC-AdaEnt to the SAC-automated-alpha-tuning? 3. The paper introduces an adaptive entropy tuning mechanism, SAC-AdaEnt, which effectively mitigates the misleading effects of high entropy in the SAC framework. However, it is noteworthy that the proposed tuning mechanism is applied exclusively to SAC. It would enhance the paper if the authors could discuss the potential for extending this adaptive mechanism to other MaxEnt-based algorithms. Such a discussion could broaden the applicability of their approach and provide insights into whether similar benefits could be realized across a wider range of RL methods. --- **Updated Review:** Thanks for your response — I think the problem you’re tackling is important, and the added experiments are helpful. I remain positive about the paper. Just a suggestion: it would be stronger if the main comparison focused on SAC with and without the entropy term. I know you included this in Figure 6, but putting it earlier might help clarify the main point. Since SAC and PPO differ in many ways, it’s hard to tell if the gap in Figure 1 is really due to entropy. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive feedback and questions. We agree with the reviewer that comparisons with wider range of baselines will enhance the empirical analysis, and we will expand the paper accordingly. Our current formulation of SAC-AdaEnt is mainly intended for empirically testing the theoretical analysis, to show how the misleading effect practically affects policy optimization (in both negative and positive ways). As our theoretical analysis shows, the effect is rooted in the use of the MaxEnt objective, and thus we believe the principles are applicable to MaxEnt algorithms in general. We discuss additional experiments with Soft Q-Learning below as an example that reflects this. **Q: (PPO with entropy)** > (Entropy in PPO) is treated as a secondary regularizer rather than a core objective. This distinction highlights why PPO may avoid some of the pitfalls of overemphasizing entropy. **A:** Thank you for pointing this out: it exactly aligns with our understanding. Our core construction of entropic bifurcation relies on the fact that the objective of policy optimization in MaxEnt algorithms is to match the Boltzmann distribution, which fundamentally changes the goal of the optimal policy and enables the misleading effect of entropy. In policy gradient methods using entropy as exploration strategies, the objective for policy optimization is unaffected, and thus entropy regularization does not lead to misleading effects as how we formulated in the paper. **Q: (SAC-auto-alpha v.s. SAC-AdaEnt)** > In Fig.11, the SAC-auto-alpha refers to automated adjustment? Could you compare the SAC-AdaEnt to the SAC-automated-alpha-tuning? **A:** Yes, in Fig.11, SAC-auto-alpha refers to automated entropy adjustment. We show more results in the next question, and will update the plots in the paper along with AdaEnt applied to other MaxEnt baselines, as you suggested. Note that SAC-auto-alpha applies a uniform entropy adjustment across all states while SAC-AdaEnt adapts the entropy individually per state. **Q: (Baselines)** > The choice of baselines is narrow; It would enhance the paper if extending the adaptive mechanism to other MaxEnt algorithms." **A:** We will follow the reviewer's suggestion and expand the experiment section with more baseline MaxEnt algorithms. Here we provide results from additional experiments comparing with Soft Q-learning (SQL) and with adaptive entropy (SQL-AdaEnt). We observe that the overall performance of SQLs generally underperforms compared to SAC, and SQL is affected by the misleading effect in the Vehicle and Quadrotor environments as well, in which case SQL-AdaEnt also improves the performance. In the table below, we summarize the mean and variance of the policy return across Vehicle, Quadrotor (as examples of negative misleading effect) and Hopper (positive misleading effect). We see that SAC generally performs better than SQL across all environments, and the effect of AdaEnt is similar on SQL and SAC. That is, in Vehicle and Quadrotor, where SAC struggled, the AdaEnt component led to better performance. The improved performance of SQL-AdaEnt on Hopper may be due to SQL being a relatively weak baseline. | **Algorithm**| **Vehicle** | **Quadrotor** | **Hopper** | |-|-|-|-| | SAC ($\alpha=0.2$) | -2003.85 ± 867.82 | -475.29 ± 244.96 | 3484.46 ± 323.87 | | SAC auto-$\alpha$ | -1551.96 ± 636.88 | -666.62 ± 233.19 | 2572.00 ± 901.35 | | SAC-AdaEnt| -1250.45 ± 725.40 | -247.58 ± 45.15 | 3285.17 ± 958.43 | | SQL| -2715.48 ± 453.00 | -6082.51 ± 1632.35 | 2998.21 ± 158.19 | | SQL-AdaEnt | -2077.59 ± 266.84 | -4499.35 ± 863.73 | 3115.94 ± 25.19 | The results align with the predictions of our theoretical analysis, which showed that the misleading effect arises from the MaxEnt objective itself. As long as an algorithm aims to reduce divergence from the Boltzmann policy, the performance can be negatively affected as analyzed in the paper. **Q: (Scalability)** > While the adaptive mechanism is promising, measuring discrepancies requires a “global” understanding at each state. In high-dimensional environments, ... computationally expensive. **A:** Yes, we formulated SAC-AdaEnt to test the theory in practice, acknowledging that it is not intended as a scalable replacement for SAC. As we shown in Section 6.2, the misleading effects of entropy may explain some of the key benefits of SAC, and thus we do not always need to eliminate the effect. We believe a promising direction in challenging environments is to use baseline SAC and PPO algorithms to understand the gap between Q-plain and Q-soft landscapes first, and then modulate the use of entropy on specific regions in the state space to minimize the need for global probing. Consequently, we focused mainly on analyzing the principles of how entropy affects policy optimization, in the hope that the understanding leads to more nuanced policy optimization strategies in challenging environments.
Summary: The authors analyze the trade-off between exploration/robustness and exploitation in Maximum Entropy Reinforcement Learning though a variety of control tasks. The paper demonstrate that in performance-critical control tasks requiring precise, low-entropy actions, Maximum Entropy approaches can become misguided by their entropy-driven exploration, which help readers to understand how entropy maximization affects policy optimization. Claims And Evidence: Yes, main claims made in the submission are generally supported by clear evidence. Methods And Evaluation Criteria: The analysis of the misleading effect of maximum entropy RL is conducted on different robotics control environments with 5 random seeds. Results are compared to PPO to highlight performance gaps and benefits of entropy maximization at different situations. Authors also proposed SAC with Adative Entropy Scaling, SAC-AdaEnt to further show the negative effective of soft Q values in some situations. Theoretical Claims: Theoretical part looks ok to me. Experimental Designs Or Analyses: The authors demonstrate empirically with a toy example and experiments on complex control tasks to show explicit situations where maximizing entropy leads to convergence towards suboptimal, high-entropy solutions, which I found is very clear and sound. Issues: the authors also propose a possible solution, SAC-AdaEnt. SAC-AdaEnt is then compared with standard SAC, however, limited discussion/analysis is on SAC-AdaEnt, for example, will SAC-AdaEnt decreases performance in tasks where SAC generally succeed (e.g. Hopper)? Supplementary Material: Briefly look at the pseudocode for SAC-AdaEnt. Relation To Broader Scientific Literature: The paper is connected to important literature areas of maximum entropy RL and exploration-exploitation trade-off. By clearly illustrating situations where maximizing entropy can degrade performance, this paper significantly extends the understanding of limitations inherent in widely used maximum entropy methods such as SAC. This paper also explicitly compares non-maximum entropy algorithm PPO against SAC, highlighting trade-offs between entropy-driven exploration and precise exploitation on practical tasks. With these discussions, I think the paper can help people improve current maximum-entropy algorithms int he future. Essential References Not Discussed: NA Other Strengths And Weaknesses: See above sections Other Comments Or Suggestions: See above sections Questions For Authors: Were entropy-traps also expected or observed in high-dimensional image-based RL tasks? Will SAC-AdaEnt decreases performance in tasks where SAC generally succeed (e.g. Hopper, or other Mujoco envs)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive feedback and questions. We agree that more discussion and analysis can be done for SAC-AdaEnt, and we provide some more details below. Indeed, we formulated SAC-MaxEnt less as an algorithm that is intended to replace SAC, but more as an ablation mechanism that tests the validity of the theoretical analysis in practical learning environments. We believe the principled understanding of the misleading effect of MaxEnt algorithms (and how it can both positively and negatively affect policy learning) opens up the possibility of developing more customized policy optimization procedures for challenging environments that exploit their specific structures and value landscapes. **Q: (SAC-AdaEnt)** > Will SAC-AdaEnt decrease performance in tasks where SAC generally succeeds? **A:** Following the question, we performed additional experiments on the environments in the paper where SAC generally succeeds: Hopper, Obstacle2D, Acrobot (also attached results for Vehicle and Quadrotor, in which SAC fails). The summary of performance comparison is shown in the table below, and we will add the plots and more discussion in the paper. We observe that SAC-AdaEnt led to less mean return compared SAC as well as higher variance. Note that our design of SAC-AdaEnt maintains the entropy-based exploration component in SAC (only changing the policy optimization objective). Thus the overall performance still benefits from exploration, and the higher variance is the result of the advantage landscape being less dominated by entropy. The obstacle2D and acrobot environments are lower-dimensional, and thus exploration itself is likely sufficient for discovering successful trajectories, and thus the performance has not degraded in SAC-AdaEnt. | **Algorithm** | **Hopper** | **Obstacle2D** | **Acrobot** | **Vehicle** | **Quadrotor** | |---------------------|--------------------|--------------------|---------------------|---------------------|---------------------| | SAC | 3484.46 ± 323.87 | 501.98 ± 0.62 | -45.25 ± 7.94 | -2003.85 ± 867.82 | -475.29 ± 244.96 | | SAC-AdaEnt | 3285.17 ± 958.43 | 501.50 ± 0.57 | -36.31 ± 16.42 | -1250.45 ± 725.40 | -247.58 ± 45.15 | Overall, we believe the experiments show that in environments where SAC performs well, SAC-AdaEnt can still rely on entropy to support exploration, while the performance may benefit less from the tendency of the standard SAC in escaping action regions with raw high Q-values. **Q: (High-dimensional RL)** > Were entropy-traps also expected or observed in high-dimensional image-based RL tasks? **A:** Entropy-traps are fundamentally dependent on the reward structure and the environment dynamics in the MDP, in terms of whether the success of learning relies on low-entropy policies. Thus the misleading effects can definitely be observed in high-dimensional RL problems. We believe the key to observing entropy traps may not always be about the dimensionality of the state space, but more crucially determined by the dimensionality of the action space -- when the action space is high-dimensional, it is more likely that the feasible policies form a low-entropy distribution over the space, thus making it easier to observe the misleading effect of MaxEnt. We believe such problems can arise frequently in high-dimensional RL problems in general, such as fine-tuning of language models, foundation Vision-Language-Action models, as well as diffusion policies.
Summary: This paper posits that the entropy maximization objective in SAC can lead to failure in tasks that require precise, low-entropy policies, essentially "misleading" policy optimization. Claims And Evidence: No. There are several problematic claims. 1. The central claim about the misleading effect relies on $\alpha = 1$, whereas it is well-known that the entropy coefficient needs to be tuned according to the environment. L627 says "without loss of generality, we use $\alpha = 1$ for the entropy coefficient", but this assumption does break generality. Naturally, with a high value of $\alpha$, SAC is not expected to work, so this "theoretical" calculation for the toy example does not do anything meaningful. Crucially, for the special case of $\alpha = 0$, the MaxEnt effect of SAC would turn off, and the optimal policy would not suffer from the misleading effect. So, trivially, one cannot assume a value of $\alpha$ without losing generality. In fact, the entire argument of the paper relies on a large enough value of $\alpha$. If $\alpha$ was not large, then there would not be any misleading effect at all. 2. "However, MaxEnt methods have also been shown to struggle with performance-critical control problems in practice, where non-MaxEnt algorithms can successfully learn" is a very vague claim in the abstract, and the works cited in this paper do not give a conclusive evidence of MaxEnt algorithms always struggling in important control problems. As a matter of fact, SAC is indeed applied to many useful real-world control problems. 3. "When a more realistic dynamics model for the quadrotor is used, then SAC always fails, while PPO can succeed under the same initialization and dynamics." — the term more realistic is vague and hand-wavy. 4. "In both the Vehicle and Quadrotor environments, the policy learned by SAC-AdaEnt mostly corrects the behavior of the SAC policy, as illustrated in their overall trajectories and the critical shown in the plots." is not a valid claim given the negligible improvement shown with AdaEnt in Figure 9. Methods And Evaluation Criteria: - Method: No new method is proposed and without understanding the role of $\alpha$, the SAC and PPO methods tested are not enough to understand the claims behind the role of entropy. - The selected benchmarks seem to be arbitrarily selected and "complex dynamics" is used without any justification of what is complex. Theoretical Claims: The analysis in Appendix A is meaningless if the authors assume $\alpha=1$ because they assume a fixed and large value of $\alpha$. Naturally, if one changes the RL objective from reward optimization too much, then the policy optimization would fail at convergence. Experimental Designs Or Analyses: - The experiments only analyze role of SAC at a fixed $\alpha$, which does not really represent the importance or drawbacks of the entropy term in SAC, because the entropy coefficient plays a key part. - The comparison between SAC and PPO is not exactly fair because one is off-policy and the other is on-policy. Supplementary Material: Appendix A. Relation To Broader Scientific Literature: Many claims are not justified well, and papers like Tan & Karakose (2023) are cited to claim that SAC delivers suboptimal solutions in comparison to PPO in complex control problems, which is not shown in those papers. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The central claim that high entropy regularization leads to poor policy optimization is trivial, which SAC already solves by having a tunable entropy coefficient $\alpha$. The analysis with a fixed high value of $\alpha$ in this paper does not offer any useful insights. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback and questions. We will reorganize the writing to make the core claims clear from the beginning of the paper. The toy example seems to have obscured the core results, which we further explain here. **Q: Large $\alpha$ value?** > - In fact, the entire argument of the paper relies on a large enough value of $\alpha$. If $\alpha$ was not large, then there would not be any misleading effect at all. > > - The analysis in Appendix A is meaningless if the authors assume because they assume a fixed and large value of alpha. > > - SAC already solves by having a tunable entropy coefficient. The analysis with a fixed high value does not offer any useful insights. **A:** We do not assume large fixed $\alpha$ values in the core results. We agree with the reviewer that under a large $\alpha$ value it seems obvious that the learning problem is changed. This is why in Section 3 we derive the general result: the misleading effect can be shown with *arbitrarily small* positive $\alpha$. All theorems in Section 5 treat $\alpha$ as a variable in the definitions and proofs to construct the bifurcation extension for misleading effects. To see how: for an arbitrary $\alpha$, the optimal MaxEnt policy follows $\pi^*(a|s)=\exp(\alpha^{-1}Q(s,a))/Z(s))$ with normalizing factor $Z(s)=\int \exp(\alpha^{-1}Q(s,a))da$ (see [Haarnoja, 2018b] Eq (4)). The Q terms are thus canceled out in the state values: \begin{align*} V(s)&=\mathbb{E}_{a\sim\pi^*}[Q(s,a)-\alpha \log(\pi^*(\cdot|s))]\\\\ &=\mathbb{E}_a[\cancel{Q(s,a)}-\alpha\cdot\bigg(\cancel{\log(\exp}(\alpha^{-1}\cancel{Q(s,a)}))-\log Z\bigg)] \\\\ &=\mathbb{E}_a[\alpha \log Z]\\\\ &=\alpha \log(Z) \end{align*} Consequently, the misleading effect preserves as long as the value ordering between good and bad states incorporates the scaling effect of $\alpha$. Moreover, with our definition of the bifurcation extension, if MaxEnt is misled for any small $\alpha$, then with any $\alpha'\geq \alpha$ the MaxEnt is misled as well. Thus the results apply to auto-tuning by choosing $\alpha$ to be smaller than the final weight after the decay schedule in SAC with autotuning. Specifically, the toy example is designed to allow the use of any positive $\alpha$. For instance, say $\alpha=0.00001$, then we would multiply with $\alpha^{-1}$ in the exponents on Line 638 and 642 in Appendix A.1.1. Thus, to maintain the misleading effect we can scale all rewards by $\alpha$ to cancel out $\alpha^{-1}$, which preserves the same integrals and the ordering of $\log Z(s_g)<\log Z(s_b)$. Thus, in the example $Q(s_0,a\in A_1)=\gamma V(s_g)=\gamma\alpha \log(Z(s_g))<\gamma\alpha \log(Z(s_b))=Q(s_0,a\in A_2)$. Namely, the MaxEnt-optimal policy still prefers the wrong action range $a\in A_2$. Note that this scaling is why in theoretical analysis of MaxEnt the temperature $\alpha$ is typically omitted (original SAC paper [Haarnoja, 2018a], remark after Eq(1)). All experiments in Section 6 are run with SAC with small $\alpha$ and autotuning (Line 923 Table 2). We will update the paper to emphasize these points. Thank you for pointing out the concern. **Q: Complex Dynamics?** > - When a more realistic dynamics model for the quadrotor is used... — the term more realistic is vague and hand-wavy. > - The selected benchmarks seem to be arbitrarily selected and complex dynamics is used without any justification of what is complex. **A:** The difference in the dynamics of quadcopters between easy and complex is described in Appendix C.2 and we will further add the full equations. We agree with the reviewer that it is hard to quantify complex dynamics, so we should refer to them as environments that require low-entropy control policies to succeed, which can be observed more directly by plotting the advantage landscapes as in Section 6.1. The benchmarks are selected for visualizing concretely *when* entropy affects policy updates, as shown in Section 6. We also showed that the misleading effect can also benefit learning (Section 6.2). Our goal is to understand when MaxEnt helps and when it hinders policy optimization. **Q: No performance issues in SAC?** > - The works cited in this paper do not give conclusive evidence of MaxEnt algorithms always struggling in important control problems. > - Tan \& Karakose (2023) are cited to claim that SAC delivers suboptimal solutions in comparison to PPO in complex control problems, which is not shown in those papers. **A:** We do not aim to show that MaxEnt algorithm *always* struggles, but to understand when the entropy terms *may* hinder learning. We definitely agree that SAC is widely observed to outperform policy gradient, which makes it important to understand in what cases they may underperform. The cited practical robotics papers are examples of such cases. In particular, Tan \& Karakose (2023) shows in Figure 6 that SAC is outperformed by other methods. We can remove the reference if it is not a convincing example. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It is still unclear to me how the analysis in this paper that compares SAC v/s PPO clearly disentangles the performance deficit in Vehicle, Quadrotor, and OpenCat to the fact that SAC has entropy regularization. In fact, there are easy ways to test this: 1. DDPG should outperform SAC in these environments, because DDPG is not misled by entropy. However, as shown in Figure 11, there is no performance difference and PPO >> DDPG ~ SAC. So, clearly the difference in SAC and PPO cannot be attributed to the presence of entropy. 2. PPO with entropy should outperform PPO without entropy. As Table 1 states, the PPO experiments are with entropy coefficient = 0. However, one could easily experiment with larger entropy coefficients (like 0.2) and analyze the Q-function curves learned by PPO. While PPO does not learn a "soft" Q-function, this objective would still lead to a high entropy actor which would fail in tasks that require precise, low-entropy policies — as claimed in this paper. Overall, I don't find the experiments convincing to show that the real reason for the performance difference in these environments is, in fact, due to the presence of entropy regularization in SAC, as opposed to other differences between the algorithms. I understand and appreciate that entropy-regularized learning in SAC would lead to a final policy that must balance between the environment reward and entropy maximization, which means it cannot be fully optimal — both intuitively and theoretically, I agree this is true. But, whether this difference has an impact on the environments considered in this paper is not justified. --- Reply to Comment 1.1.1: Comment: Thank you for your response and additional questions. > I understand and appreciate that entropy-regularized learning in SAC would lead to a final policy that ... cannot be fully optimal both intuitively and theoretically, I agree this is true. Thank you for reading our rebuttal. We are glad that it clarified your previous questions, and that our main results are now clear. We now provide explanations on the experiments. > I don't find the experiments convincing to show that the real reason for the performance difference in these environments is, in fact, due to the presence of entropy regularization in SAC, as opposed to other differences between the algorithms. We completely agree, which is why the experiments are not used to explain the differences between these RL algorithms. Instead, we focused on evaluating the practical relevance of our theoretical claims, by answering the following questions: **(A)** In practice, can MaxEnt produce value landscapes that mislead policy optimization? **(B)** If so, how does using vs. not using MaxEnt objectives affect the behavior of SAC in practice, assuming all other components of the algorithm are fixed? We addressed (A) in Section 6.1 by showing that in several control environments, SAC generated soft-Q landscapes that misled policy centers to actions that lead to failure (Figure 5 and 6). We addressed (B) in Section 6.3 by formulating SAC-AdaEnt which only switches out soft-Q values when the discrepancy between soft-Q and plain-Q values is large. By doing so, the problematic action choices shown in (A) were corrected (Figure 9). The reason that we compare with PPO is not to explain its difference with SAC. Instead, given the fact that PPO performs well on these environments, it gives us good action choices that can successfully control the agents. Then by plotting the actions from PPO and SAC on the soft-Q value landscapes, we confirm the misleading effects. In the main text (Figure 4), we have not compared with DDPG or other algorithms, again because the goal is not to compare performances. The learning curves are used to confirm the overall impact of critical action choices, since Figure 5 and 6 can only show concrete states. We respond to your two specific questions as follows. > 1. DDPG should outperform SAC in these environments, because DDPG is not misled by entropy... So, clearly the difference in SAC and PPO cannot be attributed to the presence of entropy. DDPG and SAC differ substantially. DDPG uses a deterministic actor policy, explores by adding noise rather than sampling from policy distributions, uses a single Q-network, and updates the policy using raw Q-network gradients. SAC improves DDPG systematically using the MaxEnt framework that typically results in superior performance, leading to general acceptance of the effectiveness of SAC/MaxEnt. Our work aims to alert that *while entropy offers major benefits, it is not without flaws*, complementing existing work to better understand the mechanisms behind its potential limitations. This is not an argument that the drawbacks of entropy outweigh its advantages. We do not claim that entropy’s misleading effects are the sole reason that policy optimization may struggle in challenging environments, nor do we advocate reverting to DDPG or removing entropy from SAC altogether. In fact, we have hypotheses about why DDPG may perform poorly in these environments, despite not being affected by entropy. While entropy may be misleading, it smoothes the value landscape, helping align Q-network gradients with policy improvement. In contrast, DDPG relies on raw input gradients from highly non-convex Q-networks, which can misguide policy optimization even more erratically. This issue is beyond the scope of our paper, but we could add a remark for clarity. > 2. PPO with entropy should outperform PPO without entropy. While PPO does not learn a "soft" Q-function, this objective would still lead to a high entropy actor which would fail in tasks that require precise, low-entropy policies — as claimed in this paper. We are happy to see the reviewer applying our intuition and results to this setting. As discussed, our technical results address the specific use of entropy in the MaxEnt objectives. Analyzing entropy in exploration requires different methods: as you (and Reviewer Rq6M) noted, it does not distort the Q-landscape but still hinders concentration on low-entropy policies because of exploration, rather than misleading at convergence like MaxEnt. > But, whether this difference has an impact on the environments considered in this paper is not justified. We hope the clarifications above help with a re-examination of our experiments. To summarize, we did not intend to explain the differences between SAC and other algorithms, but to evaluate the practical relevance of our theoretical analysis, by addressing (A) and (B) above through Figures 5 to 9 and the SAC-AdaEnt algorithm.
null
null
null
null
null
null
PepTune: De Novo Generation of Therapeutic Peptides with Multi-Objective-Guided Discrete Diffusion
Accept (poster)
Summary: The authors propose a framework based on the Masked Discrete Language Model (MDLM) (PepTune) to generate and optimize therapeutic peptides. They claim that the main contributions of their model are: 1) using MDLM to generate peptide SMILES representations, 2) introducing NELBO and reverse-posterior, 3) using Monte Carlo tree search to guide the generation of SMILES, and 4) training a set of classifiers and regressors for multiple properties. Claims And Evidence: The source code is not provided. The experiments are incomplete, and there is no comparison with other baseline models, which makes the paper unconvincing. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: Using MDLM to generate peptide SMILES representations and introducing NELBO and reverse-posterior Essential References Not Discussed: Wang Y, Liu X, Huang F, et al. A multi-modal contrastive diffusion model for therapeutic peptide generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(1): 3-11. Other Strengths And Weaknesses: Strengths: 1) This paper is well presented and well-written. 2) The appendix provides clear supplementary material for better understanding. 3) Therapeutic peptide generation is a challenging and novel area. Weaknesses: 1) The paper lacks significant novelty. The authors claim to use NELBO as a loss function, but this is not a new approach, as many prior works have already utilized similar loss functions, such as in Bartosh et al. (2023) [1]. MDLM itself was proposed last year in Sahoo et al. (2025) [2]. Furthermore, MCTS has been used for calculating rewards in discrete models since 2017, as demonstrated by Yu et al. (2017) [3], and there are several papers that have applied MCTS to diffusion models, such as Yoon et al. (2025) [4]. 2) The paper lacks essential comparisons with baseline models, such as Wang et al. (2024) [5]. Additionally, the authors did not provide the necessary resource code, which undermines the credibility of the model. 3) The experimental results are incomplete and do not outperform existing models. The experiments only measure two statistical metrics (validity and uniqueness), and the novelty metric is not provided. Moreover, the results do not surpass those of Data and PepMDLM, indicating that the proposed method does not offer significant improvements over such models. Other Comments Or Suggestions: None Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your insightful review. **Essential References Not Discussed:** We appreciate the reviewer’s suggestion to consider Wang et al. (2024). Although the title may suggest a close relation to our work, the scope, modeling assumptions, and core goals differ significantly. MMCD generates peptides composed of 20 natural amino acids only using a multi-modal approach. While MMCD uses the label “therapeutic peptide generation,” its evaluation is limited to docking scores and embedding-level metrics, without any testing of therapeutic properties such as solubility, hemolysis, or non-fouling. In contrast, our work directly targets real-world therapeutic peptide design by expanding the chemical space to include non-canonical residues and cyclic structures and by incorporating classifier-guided objectives tied to **experimentally actionable properties**. This direction aligns closely with the goals of the **Application-Driven Machine Learning Track**. Given these fundamental differences in scope and design goals, a direct comparison is not appropriate. **Addressing Weaknesses:** **Weakness 1:** We would like to push back on the assertion that our paper lacks significant novelty. Although NELBO is used in previous works, we derive a **unique** case of the NELBO given our novel bond-dependent masking schedule (See Appendix D.3). While the MDLM was already introduced by Sahoo et al., we make a novel extension with a bond-dependent masking schedule and derive a reverse posterior that differs from the general case in Sahoo et al. (See Appendix D.1-D.2). In addition, we introduce a novel invalidity loss that penalizes invalid peptide structure during training. Finally, despite the use of MCTS in prior discrete models, our paper introduces the first application of MCTS to diffusion models, with our method being published in *arXiv* before Yu et al. (2025). In addition, our submission date to ICML was 9 Jan 2025, which was before the publication of Yu et al. (2025) on 11 Feb 2025. **Weakness 2:** The baseline mentioned (Wang et al.) is designed to generate peptides consisting only of the 20 natural amino acids, while the core innovation of PepTune is to expand the search space to peptides with modified amino acids and cyclicizations which has not been achieved by prior models, which has been shown to enhance various therapeutic properties. These are two fundamentally different problems that are difficult to compare. Furthermore, the motivation behind the key components of our architecture, including our bond-dependent masking schedule and invalid penalty loss, stems from our unique goal of generating valid peptides from SMILES tokens despite the increased granularity. Our source code is included in the arXiv version of the manuscript; however, given the anonymity of the review process, please refer to the anonymous repo link: [**https://anonymous.4open.science/r/peptune-86CB/README.md**](https://anonymous.4open.science/r/peptune-86CB/README.md). **Weakness 3:** We believe you may have misinterpreted the comparisons in Table 1. PepMDLM is ***our*** model trained with our bond-dependent masking schedule only without MCTS guidance. We make this comparison to demonstrate that our MCTS-guidance strategy does not significantly lower the diversity of sequences and increases validity through our iterative MCTS-guidance algorithm that rewards valid expansion steps. Given that most of your concerns are regarding clarity and not the methodological and experimental soundness of our paper, and we have addressed your doubts on the novelty of our approach, we hope you consider raising your score in response to our answers. We will plan to introduce clarifications into the main text for the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. They have clearly addressed most of my concerns: 1) their work was conducted earlier than that of Yu et al. (2025). 2) the source code has been provided, and the difference between their work and the baseline (Wang et al 2024) are clearly articulated. 3) the question regarding the "Novelty" metric (i.e., the proportion of generated novel samples that do not appear in the training set) remains unaddressed. Anyway, as the authors have adequately resolved the first two issues, I am willing to raise my score. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your time in reviewing our paper. Regarding the "Novelty" metric: given that there are 11 million peptides in our dataset, comparing against all sequences in the dataset is not feasible, so we report the similarity to nearest neighbor (SNN), which takes the maximum Tanimoto similarity score for each generated sequence against ~100K sequences in the training dataset (=1 for two identical sequences). We found an SNN of 0.513 for PepMDLM and 0.486 for PepTune (compared to 0.975 for HelmGPT), indicating that our generated sequences are novel and not represented in the dataset. Thank you for your support!
Summary: This paper presents PepTune, a Masked Discrete Language Model (MDLM)-based generative framework for generating new peptides. Unlike existing approaches that struggle with multi-objective optimization or rely on continuous approximations, PepTune operates natively in a discrete sequence space while optimizing for properties such as binding affinity, membrane permeability, solubility, hemolysis, and non-fouling. The key contributions of the paper are: 1. Introduction of Bond-Dependent Masking Schedule during training and inference. 2. Introduction of Invalid Loss Function: A penalty term that discourages the generation of invalid SMILES sequences. 3. Multi-Objective MCTS Guidance: Classifier-based rewards steer sequence generation to ensure Pareto-optimal solutions across conflicting therapeutic properties. The paper uses several real case studies to demonstrate the efficacy of the proposed method. Claims And Evidence: 1. Token-dependent masking and unmasking help in the generation of peptides. There is not direct evidence supporting this claim in isolation. Idealy, PepMDLM should have been compared with vanialla MDLM for this. 2. The proposed MCTS-based decoding procedure improves pareto optimality w.r.t multiple objectives. This claim is supported using the results in Table 1 PepMDLM vs PepTune. However, it would be good to have comparisons to simple existing baselines for guided discrete diffusion for single objective guidance like Gruver et. al. (2023) and Nisonoff et. al. (2024), and them compare with them even for multi-objective guidance by making appropriate extensions. Methods And Evaluation Criteria: Currently, the paper only presents results for two versions of their own method: PepMDLM and PepTune, where both methods only differ in the inference procedure. Since the paper uses multiple modifications for training as well as inference over vanilla MDLM, it is important to show ablations for each one of them. 1. Comparing PepTune just with PepMDLM is not sufficient. Before going to the case studies, it would be good to have comparisons to simple existing baselines for guided discrete diffusion for single objective guidance like Gruver et. al. (2023) and Nisonoff et. al. (2024). Theoretical Claims: I could not check the correctness of the proof for equation (3) because I had some trouble understanding the notation. Please see the questions and the comments section below. Experimental Designs Or Analyses: Please see the "Claims and Evidence" section above. Supplementary Material: I reviewed the results (tables and figures) that are placed in the appendix and are referenced in the main paper. Relation To Broader Scientific Literature: The paper adds a promising approach for scalable inference time guidance for discrete diffusion models using MCTS. To the best of my knowledge, the use of MCTS-like algorithm for sampling from a discrete diffusion model is novel and has not been explored before. Essential References Not Discussed: Please discuss how the state-dependent training objective used in this paper is different (if it is different) from Shi et. al. (2024). Other Strengths And Weaknesses: One of the major strengths of the paper are the real case studies. However, some sections of the paper are not as clear (see comments and questions). Moreover, the paper can be improved by including more baselines (see Claims & Evidence and Methods & Evaluation sections above). Other Comments Or Suggestions: 1. Line 058, column 2: $\mathcal V$ is used without specifying its meaning, and at this point in the paper it is unclear what is meant by "takes the vector encoding of the token $\mathbf x_0$". Isn't bold-faced $\mathbf x_0$ a sequence? There are a couple of missed words around the equation, which makes it an awkward read: "where $b$ represents _the with_ ones..". Also, what is $w$ ? I'm assuming it is a fixed scalar greater than $1$, but this should be mentioned. 2. In equation 3, what is the meaning of $(...)^T x_0 x_0$? The left multiplier is transposed, so it must at least be a vector, but what is $x_0 x_0$? Similary, what is the meaning of $x_0 m$? While I could still understand equation 1, in spite of undefined symbols. I can't follow equation 3 at all. 3. The colors and fonts used in the main figure of the paper (Figure 1) make it extremely difficult to read. I'm unable to read the small fonts even after zooming in quite a bit. 4. I think there should be a $log$ on the left-hand side of equation 11. Questions For Authors: 1. In equation 3, what is the meaning of $(...)^T x_0 x_0$? The left multiplier is transposed, so it must at least be a vector, but what is $x_0 x_0$? Similary, what is the meaning of $x_0 m$? 2. What is $K$ in $\tilde x^{(l)} \in \{0,1\}^K$? 3. Why is the validity percentage for PepTune much lower than HELM-GPT? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed suggestions. **Claims and Evidence:** - We conduct an ablation study as suggested. From each model, we sampled 100 sequences of token length 100 and checked validity with our SMILES2PEPTIDE decoder. We show that both the bond-dependent masking and invalid loss improve the validity of peptides. | **Model** | **Fraction of Valid Peptides** | | --- | --- | | PepMDLM | 0.40 | | PepMDLM + No Bond Dependent Masking | 0.16 | | PepMDLM + No Invalidity Loss | 0.21 | - Due to the significant compute time needed to train additional baseline models on our 11 million peptide dataset, we decided it was not feasible to do so in this rebuttal period. Moreover, we note that Gruver et al. evaluate at most two objectives and require performing guidance updates in the latent embedding space based which can result in invalid conversion back to a discrete sequence, and Nisonoff et al. is limited to single-objective guidance. To the best of our knowledge, PepTune is the first framework for classifier-based **multi-objective guidance** for **discrete diffusion**, overcoming previous limitations via reward-based tree expansion and Pareto-aware backpropagation. **Essential References Not Discussed:** Shi et al. leverage a per-token parameter $w$, which is trained in parallel with the discrete diffusion model. In contrast, we fix $w=3$ for tokens that exist within a peptide bond to ensure the model learns to reconstruct them earlier. This construction presents a practical extension of state-dependent masking to preserve peptide structure, which could be further explored for preserving various other structured data types. In addition, we rederive the state-dependent NELBO loss in a fundamentally different manner than Shi et al. (See Appendix D.3). **Other Comments or Suggestions:** We thank you for the thoughtful suggestions. We have made the notations and figures clearer upon your request in the manuscript for the camera-ready version. 1. We notice the inconsistencies in notation that you refer to, but in Equations 1-4, we simplify the notation so that $\mathbf{x}_0$ refers to the one-hot vector of the true token at an arbitrary position in the sequence. We have added clarification and removed the $(\ell)$ superscript in the other equations to maintain consistency. 2. Yes, $w$ is a scalar greater than one. Please refer to the last sentence of Section 2.1, where the exact value is specified: “Empirically, we found that $w=3$ increased peptide validity while maintaining diversity across generated samples.” 3. The sentence is fixed to “where $\mathbf{b}$ is the vector with ones at indices of peptide bond tokens and zeroes in remaining indices.” 4. We have increased the font size for clarity and fixed Equation 11 to include the $\log$. **Answers to Questions:** 1. In equation 3, the first $\mathbf{x}_0$ is a one-hot vector is used to extract the probability associated with the unmasking the true token by taking an inner product. The second $\mathbf{x}_0$ indicates that if the current token $\mathbf{z}_t$ is a mask token, it has a $q(\mathbf{z}_s=\mathbf{x}_0|\mathbf{z}_t=\mathbf{m}, \mathbf{x}_0)$ probability of being unmasked to $\mathbf{x}_0$ and a $q(\mathbf{z}_s=\mathbf{m}|\mathbf{z}_t=\mathbf{m}, \mathbf{x}_0)$ probability of remaining masked. For clarity, we have changed the transpose to an inner product: $$ q(\mathbf{z}_s|\mathbf{z}_t, \mathbf{x}_0)=\begin{cases} \left\langle \left(\frac{s}{t}-\frac{s^w}{t^w}\right)\mathbf{b}+\frac{t-s}{t}\mathbf{1}, \mathbf{x}_0\right\rangle \mathbf{x}_0+\left\langle\left(\frac{s^w}{t^w}-\frac{s}{t}\right)\mathbf{b}+\frac{s}{t}\mathbf{1}, \mathbf{x}_0\right\rangle\mathbf{m}&\mathbf{z}_t=\mathbf{m}\\\\ \mathbf{z}_t&\mathbf{z}_t\neq \mathbf{m} \end{cases} $$ 2. $K$ represents the token vocab size. $\tilde{\mathbf{x}}\in \{0, 1\}^K$ denotes a $K$-dimensional one-hot vector where the index of the token is indicated with 1. 3. In HELM-GPT, the peptides are represented in HELM notation [1], where each token is a valid, synthesizable amino acid. In contrast, PepMDLM is trained on SMILES tokens that decompose amino acids into smaller tokens that can be pieced together into valid amino acids during generation. While this enables us to represent a greater diversity of non-natural amino acids, even a single token generated in the wrong position can result in an invalid peptide sequence, resulting in a lower validity rate. However, we note that our MCTS guidance strategy increases the validity rate to 100% due to its iterative unmasking process that is rewarded on high-scoring and valid peptides. We appreciate the reviewer’s constructive feedback and hope the new clarifications, ablations, and improved notation address your concerns. Given the novelty, empirical strength, and thorough revisions, we hope you will consider raising your score to demonstrate your support for our work.
Summary: The paper proposes a new model for novo generation and optimization of peptide. The model operates on SMILES and performs multi-objective-guided discrete diffusion; it can handle non-canonical amino acids and cyclic modifications. PepTune employs a bond-dependent masking schedule. To guide the diffusion, PepTune uses Monte Carlo Tree Search. The model is evaluated on different targets, e.g, generation of peptide binders for the GLP-1 receptor and bispecific binders for TfR and GLAST; PepTune produces peptides with better predicted properties compared to existing therapeutics. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: n/a Experimental Designs Or Analyses: yes Supplementary Material: quick pass over the supplementary; it is well detailed but a bit long for a standard conference paper. Relation To Broader Scientific Literature: The paper introduces a sequence-based approach for peptide generation based on smiles and using a masking schedule that depends on the bonding to ensure that e.g., peptide bonds, side chains are preserved during the diffusion process; the model generates sequences that are more likely to be chemically valid and optimize multiple properties. As such few existing models can model non canonical amino acids and generate cyclic peptides. Essential References Not Discussed: the authors do not detail much the related work; it would be helpful to have a separate related work section that is more detailed than the current introduction paragraph. for example, the authors could discuss more how multi property optimization is done for other molecular tasks e.g. for proteins; Accelerating Bayesian Optimization for Biological Sequence Design with Denoising Autoencoders - Stanton et al. Other Strengths And Weaknesses: * The paper presents a new combination of masked diffusion with MCTS for multi-objective optimization in discrete sequence spaces. This addresses the challenges with guiding discrete diffusion processes without gradient estimation. * The bond masking schedule is well suited to the peptide SMILES representation as it helps preserve structural elements of peptides. * PepTune successfully optimizes multiple properties simultaneously. * The paper includes extensive experiments and case studies, such as the generation of GLP-1R binders and bispecific peptides. Weakness * it seems like the generation process is not conditioned on the target, which if I understood correctly is only involved through the affinity oracles. as such the generation process can be wasteful as the generated molecules are not conditioned to the target of interest. Moreover, trained affinity oracles are known to generalize poorly out-of-distribution. As such, although target conditioned generation is possible, there might be several caveats with it. Other Comments Or Suggestions: see questions Questions For Authors: * if model trained on valid peptides, why would it sample invalid peptides ? * which size of peptides can PepTune handle ? * can PepTune be adapted to other types of molecules, e.g. proteins ? * can the authors clarify what they mean with de novo ? it looks like the model would still need to have the targets to be in distribution for the affinity oracles to work; as such the model might not perform correctly under a more challenging train/test split with OOD test targets which is usually qualified as "de novo generation"; as such the current terminology can be misleading. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are very grateful for the reviewer’s thoughtful review. **Essential References Not Discussed:** We request that the reviewer refer to Appendix C, which provides an extensive analysis of prior discrete diffusion and classifier-based and classifier-free guidance methods. We also discuss at the end of Appendix C.3 the limitations of prior discrete guidance methods and challenges in multi-objective guidance. **Addressing Weaknessses:** To ensure that our model can scale to an arbitrary number of objectives, we leverage a classifier-based guidance approach which doesn’t take the protein sequence explicitly as input to the diffusion model but instead injects the target protein information at each denoising step by computing multi-objective rewards that are back-propagated through the MCTS tree. This means that unmasking steps that produce high-scoring peptides are explored further, limiting the “wasted” peptides. We also note that this exploration method enables us to generate a batch of Pareto-optimal sequences in a single run of the algorithm, which is much faster than generating only one sequence per run like prior models. In addition, we show that our binding affinity and property classifiers generalize well on the validation data (Table 6 and Figure 14). We have to accept that only limited protein-peptide binding quantification data is available, and downstream in silico validation like VINA docking can be used to narrow down candidate peptides to be synthesized for experiments. **Answers to Questions:** - PepMDLM is trained on SMILES tokens that decompose amino acids into smaller tokens that can be pieced together into valid amino acids during generation. While this enables us to represent a greater diversity of non-natural amino acids, even a single token generated in the wrong position can result in an invalid peptide sequence, resulting in a lower validity rate, which we calculate by decoding the SMILES sequences with regular expression matching. However, we note that our MCTS guidance strategy increases the validity rate to 100% due to its iterative unmasking process that is rewarded on high-scoring and valid peptides. - For the experiments, we generate peptides of token length 50 (7 amino acids) up to 200 (30 amino acids). This aligns with existing therapeutic peptides in clinical use, where the average length is about 20 amino acids or less [1]. - Yes, our MCTS-guidance algorithm is a modular multi-objective guidance framework that can be adapted for any type of discrete data (e.g. protein sequences, promoter and enhancer DNA, RNA, small molecule drugs, natural language, etc.) and any pre-trained classifier in a plug-and-play fashion. Furthermore, our bond-dependent masking schedule can be extended to preserve structural components of other data types (e.g. functional motifs in proteins, punctuation and grammar in natural language, etc.). - In our work, we use “*de novo*” to refer to the generation of novel peptide sequences from scratch—i.e., not based on templates or fragments from known binders—and guided by pre-trained classifiers that predict binding affinity, solubility, and immunogenicity of these peptides. The model does not rely on known peptide backbones or motifs but instead generates sequences purely from noise through discrete diffusion sampling. Furthermore, the target protein is not restricted to the training set of the classifier. Both the target protein and de novo peptide binder sequence are converted into feature-rich embeddings and passed through a classifier trained to understand binding patterns for accurate prediction. Our classifiers generalize well across unseen sequences, as shown by the high validation Spearman correlation coefficients (Table 6 and Figure 14). We appreciate the reviewer’s thoughtful feedback and hope the clarifications provided demonstrate the uniqueness and adaptability of our approach. Our response highlights how PepTune addresses key limitations in prior work through a scalable, modular guidance framework, efficient multi-objective generation via MCTS, and strong generalization to arbitrary discrete data types and guidance objectives. Given our detailed answers, we hope you can consider raising your score to demonstrate continued support for our work. [1] Dougherty, Patrick G et al. “Understanding Cell Penetration of Cyclic Peptides.” *Chemical reviews* vol. 119,17 (2019): 10241-10287. doi:10.1021/acs.chemrev.9b00008
Summary: The paper proposes PepTune, a discrete diffusion model that enables multi-objective guidance for generating and optimizing therapeutic peptide SMILES. PepTune adapts a bond-dependent masking schedule and global sequence invalid loss to improve discrete diffusion performance on peptide, and uses an MCTS-based strategy to guide the masked discrete diffusion generation and demonstrates good results for several therapeutic properties. Claims And Evidence: Most claims in the submission are supported by convincing evidence. However, there are several gaps in the proposed method and experiments to support the novelty and performance of this paper. Please refer to details in "Methods and Evaluation Criteria" and "Experimental Designs or Analyses" sections. Methods And Evaluation Criteria: There are two major components of the proposed method: 1. PepMDLM that modifies the MDLM method with prior knowledge and inductive bias of the peptide data distribution; 2. MCTS based guided sampling for multi-objective design. The datasets and evaluation criteria for PepMDLM look fine, while for guided sampling, only case studies are applied without a general rigorous evaluation. Also, commonly used criteria for Pareto optimization, eg, the hypervolume metric, should be calculated. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: - For PepMDLM, an ablation study is missing for the importance of each component (i.e., bond-dependent masking schedule, bond-dependent diffusion loss, invalid peptide loss, etc.). - In Table 5, the validity of HELM-GPT is much higher than PepMDLM. The author claims that validity is assessed differently. Could the author clarify this more and how this affects the validity comparison with HELM-GPT? - The effectiveness of the guided sampling part is majorly demonstrated using case studies. How are these case study settings chosen? It is helpful if the authors could provide some evidence to support that these case studies offer a general and reliable evaluation for the proposed method that is applicable to a broad range of cases without cherry picking. - For guided sampling, there is no baseline model other than the pretrained PepMDLM that is compared, for example the simplest best-of-N approach with N setting as a value comparable to the computational cost in the MCTS sampling, as well as guided sampling approaches, eg. [1]. - For each case study, the paper only shows the distribution of each objective individually. It would be helpful to draw the Pareto frontier of the generated peptides to better demonstrate how the model output balances different objectives and explore the Pareto frontier compared to other baselines. - An analysis of the sampling cost of MCTS with respect to different hyperparameter choices and how the model performance scale with the sampling cost is helpful [1] Paretoflow: Guided flows in multi-objective optimization. ICLR 2025. Supplementary Material: I review all parts of the supplementary material. Relation To Broader Scientific Literature: The paper is related to: machine-learning wise, discrete diffusion models, guided sampling; and biological application wise, de novo peptide design. Essential References Not Discussed: - Multi-objective guided flow models: [1] Paretoflow: Guided flows in multi-objective optimization. ICLR 2025. Other Strengths And Weaknesses: The paper has good domain-specific designs in the method for peptide and evaluates the method in many real-world use cases, which have good potential to be applied in real-world peptide design problems. My main concern in this paper is the lack of baselines, benchmarks, and ablation studies in the evaluation of the method apart from the case studies. Other Comments Or Suggestions: NA Questions For Authors: - In Table 1, for PepTune, what are the rewards/properties optimized by the MCTS, and how are they related to the values reported in Table 1? They seem to be some general metrics that are irrelevant to the reward and thus cannot measure how well the rewards are optimized by the model. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. First, we would like to respectfully disagree with the assertion that the guidance strategy is inadequately evaluated. Given that the paper is on **Application-Driven ML Track**, we chose case studies that provide robust evaluation to prove efficacy in peptide-based therapeutic design. This is far more impactful and applicable to society and medicine than classical benchmarks. We ask that the reviewer reconsider their review of the paper with this in mind. **Addressing Concerns Experimental Designs Or Analyses:** - We conduct an ablation study as suggested. From each model, we sampled 100 sequences of token length 100 and checked validity with our SMILES2PEPTIDE decoder. We show that both the bond-dependent masking and invalid loss improve the validity of peptides. | **Model** | **Fraction of Valid Peptides** | | --- | --- | | PepMDLM | 0.40 | | PepMDLM + No Bond Dependent Masking | 0.16 | | PepMDLM + No Invalidity Loss | 0.21 | - In HELM-GPT, the peptides are represented in HELM notation [1], where each token is a valid, synthesizable amino acid. In contrast, PepMDLM is trained on SMILES tokens that decompose amino acids into smaller tokens that can be pieced together into valid amino acids during generation. While this enables us to represent a greater diversity of non-natural amino acids, even a single token generated in the wrong position can result in an invalid peptide sequence, resulting in a lower validity rate. However, we note that our MCTS guidance strategy increases the validity rate to 100% due to its iterative unmasking process that is rewarded on high-scoring and valid peptides. - We selected three target classes to demonstrate PepTune’s robustness: (1) proteins with known binders to benchmark against known binders, (2) proteins without known binders, and (3) dual-target cases. This is a rigorous evaluation for application in therapeutic peptide design. While previous guidance methods for discrete diffusion are limited to optimizing 1-2 objectives, we evaluate guidance across 4-5 non-trivial objectives that are highly relevant to therapeutic peptide design. - We appreciate the reviewer’s suggestion to conduct best-of-N comparison. However, we believe that our comparisons between PepTune and PepMDLM are sufficient in demonstrating the efficacy of our guidance strategy for the following reasons: - In Pareto optimization, scalarizing or ranking sequences across conflicting objectives without access to the Pareto frontier leads to suboptimal trade-offs, making it difficult to rank best-of-N. - Despite exploring more sequences, PepTune achieves comparable runtime by reusing high-reward unmasking paths within the MCTS tree. Rather than starting from scratch, PepTune expands only unexplored nodes, reducing redundant computation. Empirically, we find similar runtimes for generating 100 *valid* sequences using PepMDLM (403 sec) and PepTune (347 sec for top 100 sequences with $M=10$ and $N_{\text{iter}}=128$), while PepTune yields substantially higher property scores (Figures 4–9). - We also appreciate the suggestion to conduct a benchmark against ParetoFlow. However, ParetoFlow is designed for continuous flow-matching models, whereas our approach is designed specifically for discrete diffusion. Due to the fundamental differences in model class and data type, a direct comparison would not be meaningful; to our knowledge, PepTune is the first to apply Pareto-aware guidance for discrete diffusion, and we hope this novelty is considered. - Given that our method supports robust guidance beyond 2-3 objectives, it is impossible to visualize the high-dimensional Pareto frontier. Instead, our graphs demonstrate that our approach simultaneously raises the scores across 5 objectives (see Figure 9) to a plateau, indicating that no further optimization is possible without sacrificing another objective. - Each MCTS run explores $M\times N_{\text{iter}}$ sequences. The sampling cost scales linearly with respect to both of these parameters. We compared $M=\{10, 50, 70, 100\}$ and found similar guidance curves and set $N_{\text{iter}}=128$ constant. **Responses to Questions** - In this experiment, we optimized 5 properties: binding affinity to GFAP, membrane permeability, solubility, hemolysis, and non-fouling, all of which are not related to the values in Table 1. In Table 1, we aim to show that our MCTS-guidance algorithm does not significantly lower the diversity of sequences and increases validity through our iterative MCTS-guidance algorithm that rewards valid expansion steps. We thank you for your suggestions and hope you reconsider raising your score in light of these responses and the broader goals of the Application-Driven ML Track. [1] Zhang et al. Helm: A hierarchical notation language for complex biomolecule structure representation. http:[//dx.doi.org/10.1021/ci3001925](https://dx.doi.org/10.1021/ci3001925) --- Rebuttal Comment 1.1: Comment: The authors address most of my concerns, and I will increase my score. Regarding Best-of-N, I understand the difficulty in ranking samples, but I suggest the authors try an alternative by dropping samples that are Pareto inferior to some other sample and report the number of samples left and their performance on different properties. This will give a much better idea of how a naive approach would work for the multi-objective optimization than only comparing with PepMDLM, which has no access to the target property. Besides, in addition to the running time, I would suggest adding a table of the NFE (number of functions evaluated) for PepMDLM, PepTune, and the naive Best-of-N like baseline mentioned above. --- Reply to Comment 1.1.1: Comment: We deeply appreciate your detailed suggestions, and we will try our best to include the best-of-N and NFE table in our camera-ready version. Thank you for your support!
null
null
null
null
null
null
Generative Modeling Reinvents Supervised Learning: Label Repurposing with Predictive Consistency Learning
Accept (poster)
Summary: Traditional methods predict labels directly from data inputs, but recent tasks often involve more complex labels. To address this, the authors propose Predictive Consistency Learning (PCL), inspired by consistency training in generative models. Unlike traditional approaches, PCL incorporates noise-perturbed labels as additional references, ensuring consistency across varying noise levels. It learns the relationship between latent features and a range of label information, enabling progressive learning and multi-step inference akin to gradual denoising, thus improving prediction quality. Experiments on vision, text, and graph tasks demonstrate the superiority of PCL. Claims And Evidence: The authors have demonstrated the effectiveness of their method by testing it across a variety of settings, including n-body simulation, semantic segmentation, and next-token prediction. These experiments highlight the versatility of PCL. Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally make sense. Theoretical Claims: This paper does not make theoretical claims. Experimental Designs Or Analyses: The proposed multi-step prediction approach brings up concerns regarding its efficiency. How much longer does the training process take compared to standard supervised learning? Additionally, what about the inference time? The authors may provide this information. Supplementary Material: The authors did not provide supplementary material. Relation To Broader Scientific Literature: Label smoothing involves applying some form of “perturbation” to the labels to help the model generalize better. How much of a performance advantage does PCL offer over label smoothing? The authors may report the results on a small dataset. [1] Rethinking the Inception Architecture for Computer Vision. CVPR 2016 Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: Given that it requires multiple forward passes for inference, how does its performance compare to that of ensemble methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the valuable comments, nice suggestions, and for acknowledging our work. Below we respond to your specific comments. > **Q1: The proposed multi-step prediction approach brings up concerns regarding its efficiency. How much longer does the training process take compared to standard supervised learning? Additionally, what about the inference time?** For training, since the loss calculation requires two inference predictions with different noise levels during training, it requires twice the training cost of the traditional supervised learning paradigm. Moreover, it typically takes less than twice the iteration for the model to converge. For inference, PCL can produce much superior predictions with merely a single forward pass as SL does. Meanwhile, PCL can achieve better performance with more forward passes. As shown in Fig. 3 using a GCN backbone on the 3,2,1 dataset, even single-step PCL inference achieves superior performance (prediction error 0.02835) compared to SL (0.03478), while additional inference steps (e.g., 5-step) can further refine results (0.02795). > **Q2: Label smoothing involves applying some form of “perturbation” to the labels to help the model generalize better. How much of a performance advantage does PCL offer over label smoothing? The authors may report the results on a small dataset.** Thanks for this valuable point. To evaluate this, we conducted experiments on the N-body simulation task (3 isolated particles, 2 sticks, and 1 hinge) using GCN as the backbone model. For label smoothing with continuous outputs, we implemented Gaussian noise injection via reparameterization: $\hat{\mathbf{y}} = \mathbf{y} + \mathcal{N}(0;\beta\mathbf{I})$ where $\beta$ controls the noise intensity and we use $\hat{\mathbf{y}}$ to calculate loss. This optimization are verified to induce better representations [1]. Then, we further finetune the trained model on exact labels to ensure precise regression. The Prediction error ($\times 10^{-2}$) results of SL, PCL, and SL_label_smoothing with various 𝛽 are as follows: | |SL| PCL| 𝛽=0.01 | 𝛽=0.03 | 𝛽=0.05 | 𝛽=0.1 | 𝛽=0.3 | 𝛽=0.5 | 𝛽=1.0 | | -| -| -| - | -| -| -| - | -| -| | Prediction Error | 3.453 | 2.795 | 3.460 | 3.461 | 3.428 | 3.418 | 3.256 | 3.258 | 3.368 | The results show that while label smoothing yields certain improvements over standard supervised learning (with the best performance at β = 0.3), it still falls short of PCL by a significant margin. Specifically, PCL achieves a prediction error of 2.795, clearly outperforming both SL and the best-performing label smoothing setup (3.256). This highlights PCL’s stronger ability to generalize through its learned iterative refinement process. [1] How Does a Neural Network’s Architecture Impact Its Robustness to Noisy Labels? NeurIPS 2021. > **Q3: Given that it requires multiple forward passes for inference, how does its performance compare to that of ensemble methods?** Thank you for the insightful question. To evaluate this comparison, we conduct experiments on the N-body simulation task (comprising 3 isolated particles, 2 sticks, and 1 hinge), using a GCN backbone. We compare PCL to a standard bagging ensemble approach, where $n$ independent models are trained on the full dataset. At inference time, predictions from the $n$ models are averaged to reduce individual model biases. The prediction error ($\times 10^{-2}$) across different numbers of inference passes (i.e., number of independent models or PCL steps) is summarized below: | #inference passes | 1 | 2| 3| 4| 5| | -| -| -| -| -| -| | PCL| 2.834 | 2.818 | 2.808 | 2.796 | 2.786 | | SL (Bagging) | 3.453 | 3.210 | 3.132 | 3.084 | 3.065 | As the results show, while bagging does provide performance improvements with more inference passes, it consistently underperforms compared to PCL. Notably, even with five inference passes, bagging does not reach the accuracy of PCL’s single-pass result (2.786 vs. 2.834), highlighting the efficiency and effectiveness of PCL’s learned iterative refinement. The computational trade-offs further emphasize PCL's advantages. Bagging incurs significant overhead—requiring training, storing, and maintaining $n$ separate models, resulting in O(n) training and memory costs. In contrast, PCL achieves its gains through a single model, keeping both training and deployment lightweight while delivering superior predictive performance. --- We hope this response could help address your concerns, and we are more than happy to address any further concerns you may have.
Summary: This paper aims to leverage more rich label information to enhance supervised learning. To achieve this goal, it proposes a new learning method called PCL (predictive consistency learning). Different from those conventional supervised learning approaches, PCL learns from both the labels as well as the noise-perturbed label information. Besides, PCL also optimizes the consistency across different noise levels to ensure the learning quality. Experiments on multiple tasks demonstrate the effectiveness of the proposed method. Claims And Evidence: Weaknesses: - There seems to be an overclaim regarding the label modeling. In supervised learning, the label tends to correspond to the annotations from crowd sourcing. The recovered noise from the label is also related to the recovering model, which is different from the label itself. Therefore, it is improper to claim that the label information is more complex than the inputs. Methods And Evaluation Criteria: Strengths: - The proposed method has been adequately evaluated on multiple tasks. The authors have also provided thorough explanations for the experimental settings and criteria. Weaknesses: - The computational cost of the proposed method should be quite expensive, as it learns to model the noise using an additional model. Therefore, it is unfair to directly compare PCL with supervised learning. Moreover, it is necessary to analyze the complexity of the proposed method. Theoretical Claims: Weaknesses: - There is no theoretical evidence for supporting the proposed PCL. Experimental Designs Or Analyses: Strengths: - The experiments on multiple datasets demonstrate the effectiveness of the proposed method, as the performance gain is satisfactory. Weaknesses: - The ablation studies are only conducted on the constrained N-body simulation task. There is no ablation studies for other tasks (semantic segmentation and supervised next-token prediction fine-tuning). - It seems that the authors have not reported the hyperparameter settings. Besides, there is no hyperparameter sensitivity analysis. Supplementary Material: Weaknesses: - There is no source code. The reproducibility is unclear. Relation To Broader Scientific Literature: Strengths: - This paper is related to supervised learning and generative models such as Diffusion. The authors have properly discussed them. Essential References Not Discussed: Weaknesses: - It seems that the references are a bit out-of-date. Most references are from or earlier than 2023. Is there any more recent related work? Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable comments and nice suggestions. However, we believe there may exist some misunderstandings regarding the methodology. We regret any confusion caused by our presentation and would like to clarify the core methodology of PCL. What PCL does is to leverage the concept of consistency modeling to enhance the traditional predictive paradigms (e.g. in supervised learning), which modifies the mapping manner in training from `data input x -> full label y` to `data input x, noised label y_t (as additional hints) -> full label y`, and then enforce mapping consistency across different noise levels (controlled by t) to regulate label information content during training. The "label modeling" in the title refers to systematically modulating label information density (which is achieved by gradually adding noise to the raw labels and providing noised labels as input to make the model learn the complementary label information) and PCL does not employ an auxiliary model to predict or recover noise but rather introduces a novel training paradigm. > **Q1: There seems to be an overclaim regarding the label modeling. It is improper to claim that the label information is more complex than the inputs.** To clarify, we have not claimed that label information is universally more complex than inputs in the paper. Rather, we highlight (e.g., in L30 right) that "recent advanced scenarios involve much more complex labels" and "these challenges expose predictive bottlenecks due to the inherent complexity of transformations from features to labels, in addition to feature extraction". The label complexity is for comparison to simple categorical annotation tasks, rather than compared to input complexity. In many modern tasks, particularly those beyond traditional classification, labels and inputs can share closer dimensionality and information density, representing a significant departure from simple categorical annotation tasks. For instance, in LLM fine-tuning, both questions (inputs) and answers (labels) are natural language sequences requiring similar levels of semantic understanding and generation capability. > **Q2: The computational cost. PCL learns to model the noise using an additional model.** PCL is within the supervised learning paradigm and **does not employ an auxiliary model** to predict or recover noise. From the model perspective, it directly predicts the labels just as SL does. The mere difference lies in that it receives additional label hints as input. Thus, the additional computational overhead is minor in a forward pass. During training, since the loss calculation requires two inference predictions with different noise levels, it requires twice the training cost of the traditional supervised learning paradigm. Moreover, it typically takes less than twice the iteration for the model to converge. For inference, PCL can produce much superior predictions with merely a single forward pass as SL does. More forward passes can lead to further performance gains. As shown in Fig. 3 using a GCN backbone on the 3,2,1 dataset, even single-step PCL inference achieves superior performance (prediction error 0.02835) compared to SL (0.03478), while additional inference steps (e.g., 5-step) can further refine results (0.02795). > **Q3: The ablation studies are only conducted on the N-body simulation task. No ablation studies for other tasks.** The purpose of our experiments across three modalities is to validate the broad applicability of our method. We chose to conduct a more detailed analysis on one task to ensure the depth of our analysis. Here we supplement the ablation study of $\lambda_2$ term on the semantic segmentation task in [table](https://anonymous.4open.science/r/PCL-0F54/lambda2-segmentation.png) and we find excluding excluding the $\lambda_1$ term causes the model to fail. > **Q4: Hyperparameter settings and sensitivity analysis.** The key model-specific hyperparameters are set as $\alpha=0.2$ and $\lambda_1=\lambda_2=1$ in our experiments, with $\lambda$ values chosen for equal weighting rather than through extensive tuning. In response to your valuable suggestion, we conduct sensitivity analyses across reasonable parameter variations. All other parameters follow standard SL configurations as detailed in the paper, with all source code to be fully disclosed upon acceptance. The experimental results for hyperparameters: $\alpha$: [Table](https://anonymous.4open.science/r/PCL-0F54/alpha.png) $\lambda_1$: [Table](https://anonymous.4open.science/r/PCL-0F54/lambda1.png) $\lambda_2$: [Table](https://anonymous.4open.science/r/PCL-0F54/lambda2.png). > **Q5: Source code.** The open-source release of PCL is crucial for its impact on the research community, and we commit to open-sourcing our code upon acceptance of the final manuscript. --- We hope this response could help address your concerns, and we are more than happy to address any further concerns you may have. --- Rebuttal Comment 1.1: Comment: Thank you for your valuable explanations, which have addressed my misunderstandings and concerns. After reading the rebuttal and the comments from other reviewers, I would like to raise my score to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and for acknowledging our efforts in addressing the initial concerns. We sincerely appreciate your constructive engagement throughout this process and the valuable suggestions you have provided. We will carefully incorporate the additional explanations into the manuscript to enhance clarity and ensure that the key points are more clearly articulated. As the discussion period remains open, we warmly welcome any further feedback from all the reviewers to help us refine the paper further. Thank you again for your time and expertise.
Summary: This paper introduces a new prediction paradigm for more complex label spaces than the ones that are traditionally assumed in supervised learning. The paper draws inspiration from the "generative consistency learning" learning paradigm to produce the predictive consistency learning (PCL) paradigm, which the authors suggest is more suited to complex label spaces. The technique involves predicting the label with progressive amounts of label noise so that models can make predictions even when the ground truth labels contain varying amounts of noise, and enables multi-step denoising techniques within the PCL framework. Experimentally, the authors demonstrate PCL compared to supervised learning in a variety of settings: N-body simulation, semantic segmentation, and language modeling. Claims And Evidence: The main claim of the paper is that the proposed PCL is better suited to complex label spaces compared to supervised learning. The experimental evidence for this suggests that this is true -- in all of the settings that the authors evaluated, N-body simulation, semantic segmentation, and language modeling, PCL indeed outperforms supervised learning, and these settings all correspond to what I would think of as having more complex label spaces. Methods And Evaluation Criteria: The evaluation protocol makes sense to me. N-body simulations, semantic segmentation, and language modeling are all tasks which involve complex label spaces that are typically modeled using supervised learning. The proposed method is described clearly, although in the abstract, it was initially unclear what generative consistency models refer to -- more exposition in the abstract would help for readers who are unfamiliar with this area. Theoretical Claims: I did not carefully verify the correctness of any of the math introduced in the paper, but it seemed to make sense from my initial read. Experimental Designs Or Analyses: The experimental design seemed sound from my read-through. The N-body simulation is a graph learning problem, so the authors use appropriate supervised graph learning baselines. Similarly, the baselines were chosen appropriately for the semantic segmentation and language modeling tasks. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The work is motivated by the need for learning paradigms targeting modern complex label spaces -- the current literature often comprises techniques that are largely based on traditional supervised learning, where complex or structured label spaces are perhaps handled in an ad hoc way. Essential References Not Discussed: From what I can tell, the essential references were mostly covered. If anything, the related work seems to be missing a discussion of how this work relates to the field of structured prediction, as such label spaces seem to be a key motivating component of this work. Other Strengths And Weaknesses: **Strengths** - The proposed PCL is a novel and interesting take on the problem of learning labels from inputs, and tackles the relevant challenge of learning from more complex and structured labels. - The proposed framework is described in a clear and easy-to-follow way. - The experimental setup seems to appropriately cover complex label spaces including graphs, pixel-level segmentation, and language modeling. **Weaknesses** - I might have misunderstood, but it was unclear to me how the choice of noise distribution would impact results. Does the noise distribution have to be similar to the natural label noise distribution? - Again I might have missed this, but I can't tell what noise distribution was used for the N-body simulation problem or the language modeling task. - It's not immediately clear how "next-token prediction" is a complex label space -- if the label space is defined as predicting a single token from the vocabulary, then it seems to simply be a large label space. On the other hand, I would think that that the task of predicting entire sequences (all at once) would be a more obvious choice for a more complex structured label space, perhaps applied to a sequence to sequence learning problem. Other Comments Or Suggestions: On my first read, I found the abstract to be a somewhat confusing read. "Directly predicting labels from data inputs has been a long-standing default mechanism for label learning tasks, e.g., supervised learning" seems like an unnecessarily roundabout way to describe supervised learning -- are there other examples of label learning tasks beyond supervised learning and this work? This statement seems to imply this. Later on in the abstract, the authors mention the "generative consistency training concept in generative consistency models," which I was unfamiliar with -- this part should have somewhat more exposition. Questions For Authors: As mentioned before, my main question is about the noise distributions used in the experiments and how the choice of noise distribution impacts results. Furthermore, for complex label spaces, it might be unclear or challenging to design a realistic noise distribution -- how might such cases be handled? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the valuable comments, nice suggestions, and for acknowledging our work. Below we respond to your specific comments. > **Q1: How this work relates to the field of structured prediction.** Thanks for your insightful suggestion. Indeed, our work also handles the challenge of structured prediction, i.e., learning to predict complex, structured outputs. However, we observe that with the widespread adoption of deep neural networks, many structured prediction problems (like sequence or graph prediction) can now be effectively handled through standard feed-forward architectures trained end-to-end, often being naturally subsumed under the broader umbrella of supervised learning. While classical structured prediction methods may focus on specific output structures (e.g., trees or sequences), our approach makes no assumptions about certain structures, while being particularly effective for handling complex labels. This is why we emphasize label complexity within the supervised learning paradigm rather than through the lens of structured prediction methods. We sincerely appreciate your suggestion and will expand our discussion of related work in this field during revision. > **Q2: Does the noise distribution have to be similar to the natural label noise distribution?** The noise distribution does not necessarily need to match the natural label noise distribution, as our primary goal in Sec. 4.2 is to design a noise process that systematically degrades label information in a controlled manner. The key is to tailor the noise to the structure of the label space (whether continuous or discrete) rather than replicating real-world noise patterns. For continuous labels, we employ Gaussian noise to gradually corrupt the signal, while for discrete labels, we use categorical noise that diffuses the probability mass of one-hot vectors across classes. In cases where the discrete label space is excessively large or complex, we shift to perturbing the latent representations directly with continuous noise. These approaches comprehensively cover the spectrum of label types (continuous or discrete), which is also why we choose these tasks in experiments, which can exemplify different scenarios. > **Q3: What noise distribution was used for the N-body simulation problem or the language modeling task?** For N-body simulation, the label $\mathbf{y}\in \mathbb{R}^{p\times 3}$ where $p$ denotes the number of particles. Since the labels maintain continuous values, we adopt Gaussion noise for the noising process: $q(\mathbf{y}_t | \mathbf{y}) = \mathcal{N}(\mathbf{y}_t; \sqrt{\bar{\alpha}_t} \mathbf{y}, (1-\bar{\alpha}_t)\mathbf{I})$ where $\bar{\alpha}_t$ controls the noise level. For language modeling, since the raw token space is extremely large, we directly add noise to the latent embeddings of the labels, which are multi-dimensional continuous vectors. we adopt Gaussion noise for the noising process: $q(\mathbf{h}\_{\mathbf{y}\_t} | \mathbf{h}\_{\mathbf{y}}) = \mathcal{N}(\mathbf{h}_{\mathbf{y}_t}; \sqrt{\bar{\alpha}\_t} \mathbf{h}\_{\mathbf{y}}, (1-\bar{\alpha}\_t)\mathbf{I})$. The details can be found in Sec. 4.2. > **Q4: It's not immediately clear how "next-token prediction" is a complex label space.** Next-token prediction, while seemingly maintaining simple labels, actually operates in a complex label space due to the vast vocabulary size (often tens or hundreds of thousands of tokens) and the semantic information encoded in each prediction. Though it only predicts one token at a time, the process implicitly models long-range dependencies and coherent reasoning, as each step conditions on the full context to generate meaningful continuations. The high dimensionality of the token embedding space (e.g., 4096-dimensional vectors), where we apply noise, also underscores its complexity. We chose this task for its practical relevance in modern LLMs and its ability to demonstrate our method’s scalability to large label spaces. Sequence-to-sequence tasks are indeed another compelling setting, and we appreciate the suggestion and will explore such extensions as a natural direction for future work. > **Q5: Are there other examples of label learning tasks beyond supervised learning and this work?** The reason we haven’t strictly confined PCL to supervised learning is that the mapping from X to Y is fundamental across many learning paradigms, including semi-supervised, weakly supervised, and even self-supervised settings. While our current experiments focus on standard supervised learning for clarity and validation, PCL’s core mechanism (refining the X-to-Y mapping) is intentionally designed to be adaptable. We believe this flexibility opens doors for future extensions into broader learning frameworks, and we’re excited to see explorations in these directions in subsequent works. --- We hope this response could help address your concerns, and we are more than happy to address any further concerns you may have. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications, these are quite helpful! I will maintain my score and recommend acceptance. --- Reply to Comment 1.1.1: Comment: We are deeply grateful for your thoughtful feedback and continued support. Your constructive engagement and valuable suggestions have been instrumental in this process. We will carefully incorporate the additional explanations into the manuscript to enhance clarity and ensure that the key points are more clearly articulated. Thank you once more.
Summary: The paper introduces Predictive Consistency Learning (PCL), a paradigm for tasks with complex labels. The paper starts with arguing that traditional supervised learning struggles with high-dimensional or structured labels due to the difficulty of mapping compressed input features directly to rich label spaces. Inspired from image diffusion, PCL addresses this by decomposing label learning into a progressive process: it introduces noise-perturbed labels as additional inputs during training and enforces consistency across predictions at different noise levels. Experiments on vision (semantic segmentation), graph (N-body simulation), and text (LLM fine-tuning) tasks demonstrate PCL's effectiveness over standard supervised learning. Claims And Evidence: The claims are supported by empirical evidence across diverse tasks. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: Experiments are sound but can be made more through with comparison against advanced baselines such as diffusion models for structured labels (for e.g. SegDiff [1] for segmentation). --- [1] SegDiff: Image Segmentation with Diffusion Probabilistic Models, https://arxiv.org/pdf/2112.00390 Supplementary Material: Briefly. Relation To Broader Scientific Literature: PCL builds on consistency models and diffusion-based training but adapts them to deterministic label prediction. It connects to curriculum learning (progressive label complexity) and residual learning (noise as input). Essential References Not Discussed: No. Other Strengths And Weaknesses: ### Strengths - Paper is well presented with clear details - Experiments encompass multiple modality benchmarks ### Weakness - The thesis of the paper is that for complex output spaces standard supervised learning is not appropriate, it also acknowledges that there are methods which can project the output space to a simpler latent space but provides no qualitative comparison against them. Other Comments Or Suggestions: I think [extreme classification](http://manikvarma.org/downloads/XC/XMLRepository.html) can be a good benchmark for this work (as it inherently has a high dimensional predictive output space) Questions For Authors: 1. In what *predictive* cases is learning an invertible function from $Y_E \rightarrow Y$ not straightforward (hence necessating the need of the proposed approach)? dual encoder style modeling (which has a very simple inverse map) should be possible for most predictive tasks, right? 2. How does PCL differ from curriculum learning strategies that gradually increase label complexity? 3. Is there a train / inference time difference in the SL vs PCL training / inference? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the valuable comments, nice suggestions, and for acknowledging our work. Below we respond to your major comments. > **Q1: Qualitative comparison to methods that project the output space to a simpler latent space.** Introducing an encoder-decoder structure with a $Y\to Y_E$ encoder and a $Y_E\to Y$ decoder may help handle the complexity of $Y$. This approach is commonly used in generative models like Latent Diffusion Models (LDMs), where data (e.g., images or videos) are first compressed into a latent space for generation before being decoded back into the original domain. However, for predictive mapping from $X$ to $Y$, which is a fundamental component in ML, explicitly introducing an encoder-decoder stage adds computational burden. During inference, the encoder (mapping $Y \to Y_E$) is redundant, yet its inclusion increases training complexity. Additionally, the two-stage learning process (first $X \to Y_E$, then $Y_E \to Y$) can lead to error accumulation, degrading overall performance. In contrast, our method retains the simplicity of traditional supervised learning while enhancing performance through progressive label decomposition. This approach allows for more effective learning of label information with minimal yet refined modifications to the standard training framework. > **Q2: How does PCL differ from curriculum learning (CL) strategies that gradually increase label complexity?** The key differences between PCL and CL can be summarized as follows: 1. **Learning Target Consistency**. CL predefines staged learning targets (e.g., coarse-to-fine labels), where each stage learns an approximation of the true label. This risks error accumulation, i.e., biased features from early stages may propagate to later stages. In contrast, the objective of PCL remains stable: always predicting the true label, with complexity dynamically adjusted by the input noise level. 2. **Progressive vs. Simultaneous Learning**. CL follows a fixed, sequential progression (e.g., easy→hard labels). PCL, however, randomly samples time steps during training, enabling the model to learn various label information levels (from partial to complete) simultaneously, where the time step t acts as a controller for label granularity. 3. **Controllable Prediction**. PCL’s noise-conditioned framework allows explicit control over prediction granularity via time step t. As evidenced in Fig. 5, setting a larger t tends to encourage the model to improve broader structural relationships, while setting a smaller t encourages the model to focus on finer details. Further, PCL supports multi-step inference (Sec. 4.4), where predictions are iteratively refined (analogous to diffusion denoising), boosting final accuracy. 4. **Leveraging Partial Labels as Input**. One of the motivations of PCL is to treat labels not just as targets but as learning facilitators. By feeding partially noised labels (e.g., "hints") during training, the model learns to exploit intermediate information (similar to how humans use reference solutions to help solve problems). > **Q3: Is there a train / inference time difference in the SL vs PCL training / inference?** For training, since the loss calculation requires two inference predictions with different noise levels during training, it requires twice the training cost of the traditional supervised learning paradigm. Moreover, it typically takes less than twice the iterations for the model to converge. For inference, PCL can produce much superior predictions with merely a single forward pass as SL does. Meanwhile, PCL can achieve better performance with more forward passes. As shown in Fig. 3 using a GCN backbone on the 3,2,1 dataset, even single-step PCL inference achieves superior performance (prediction error 0.02835) compared to SL (0.03478), while additional inference steps (e.g., 5-step) can further refine results (0.02795). > **Q4: Experiments are sound but can be made more thorough with comparison against advanced baselines such as diffusion models for structured labels.** Thanks for the acknowledgment and suggestion. Our method primarily takes traditional supervised learning as our main framework, where PCL is constructed within. The generative modeling approach you mentioned essentially reformulates the prediction task as a generation task, representing one specific implementation under the broader supervised learning framework ($X\to Y$ mapping). PCL operates at a different conceptual level, whose methodology for label utilization could potentially be integrated into diffusion-based prediction as well. In a word, our current framework is a bit orthogonal to and can be combined with the diffusion models. In our revision, we will discuss relevant works in this direction and explore subsequent attempts in future work. --- We hope this response could help address your concerns, and we are more than happy to address any further concerns you may have.
null
null
null
null
null
null
SBGD: Improving Graph Diffusion Generative Model via Stochastic Block Diffusion
Accept (poster)
Summary: This paper introduces the SBGD model to address the scalability and size generalization challenges of Graph Diffusion Generative Models (GDGMs). Traditional GDGMs struggle with high memory requirements and poor generalization to graph sizes not seen during training. SBGD mitigates these issues by refining graph representations into a block graph space, leveraging structural priors to reduce memory complexity and enhance generalization. Empirical evaluations demonstrate that SBGD achieves memory savings while maintaining the graph generation performance of state-of-the-art methods. Claims And Evidence: The proposed SBGD method claims improved computational efficiency and generalizability. While the authors provide complexity comparisons in Table 1 and memory ratios in Table 2, an empirical analysis of training time would strengthen their efficiency claims. Regarding generalizability, the term "graph size" is somewhat ambiguous. In Figure 3, it appears to refer to the density of the edges. Please clarify the definition of "graph size." Methods And Evaluation Criteria: The method is straightforward yet appears effective. The decomposition of large graphs into blocks facilitates sparse graph construction. The efficiency and effectiveness of the proposed model are closely tied to how the graph is partitioned. The authors address this in Figure 4 with empirical results. How do the authors determine the number of partitions? Is it treated as a hyperparameter, or is it based on another metric? In Appendix C2, the authors use METIS partitioning, which results in balanced graph blocks with approximately equal node counts. This might not be the case for real graphs, while we might find unbalanced blocks. Did the authors compare this method with other partitioning techniques? Theoretical Claims: The authors provide a theoretical discussion on model complexity. It would be beneficial to include more details on how these results are derived. Experimental Designs Or Analyses: The experiments are well-designed and effectively support the claims of efficiency and generalizability. One thing to note is that the authors missed one state-of-the-art baseline DruM[1]. Please add that into comparison if possible. [1] Graph Generation with Diffusion Mixture, ICML 2024 Supplementary Material: The appendix offers a substantial amount of background, derivations, and technical details that complement the main manuscript. Relation To Broader Scientific Literature: Graph generation is a significant issue with applications across various domains. Diffusion-based graph generative models struggle with large graphs, and the proposed SBGD addresses this through block-wise modeling. While this contribution is noteworthy, a more in-depth discussion of its potential positive and negative impacts would be beneficial. Essential References Not Discussed: See Experimental Designs Or Analyses for DruM[1]. [1] Graph Generation with Diffusion Mixture, ICML 2024 Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: See above. Questions For Authors: The authors mention distributed training for block-wise modeling in their theoretical discussion, which is somewhat unconvincing. Node-wise samplers are well-implemented in both DGL and PYG and support distributed training. Additionally, treating blocks as batches may lead to information loss if edges between blocks are ignored. Can the authors elaborate on this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, thank you for the insightful comments and helpful suggestions. These greatly help us improve upon the current paper, and we appreciate the opportunity for addressing your questions and concerns here. ## Clarification of Graph Size We intended the term "graph size" to specifically refer to the number of nodes. The confusion may stem from the visualizations in Figure 3, where denser layouts naturally arise as the node count increases (given a fixed edge-to-node ratio), making the graphs appear more connected. We will revise the manuscript to make this definition explicit and improve the accompanying figure captions for clarity. ## Determination of the Number of Partitions The number of partitions is treated as a tunable hyperparameter. As shown in Figure 4(b), our empirical analysis suggests that the optimal number of partitions depends on both the dataset and task. Extremely fine or coarse partitions tend to degrade performance due to underfitting or oversmoothing, respectively. We will explicitly state this in the revised manuscript and offer guidance on selecting this hyperparameter based on validation performance. ## Comparison with Other Partitioning Techniques Our use of the METIS partitioning algorithm is motivated by the core modeling assumption that intra-block (diagonal) structures are dense and relatively homogeneous, while inter-block (off-diagonal) interactions are sparse. METIS is well-suited for this setting, as it efficiently divides graphs into balanced clusters while minimizing edge cuts. We did experiment with random partitioning during early development, but observed significantly worse performance—likely due to incoherent block structures and noisier learning dynamics. We will aim to include a more systematic study of this comparison in the revised version. ## Details on Theoretical Derivation Thank you for pointing this out. We agree that providing more detailed derivation steps will enhance the clarity and completeness of the theoretical section. We will expand this part in the revision to include step-by-step derivations and clarify underlying assumptions. ## Comparison with DruM (ICML 2024) We appreciate the pointer to DruM—an interesting and related work. While both papers share a modular principle, the core assumptions and objectives differ: - DruM assumes that the graph distribution is multi-modal, and it decomposes the generative process into learning distinct modes to facilitate inference. - In contrast, our approach focuses on structural decomposition for memory efficiency, leveraging graph partitioning to operate on block-level representations. Our method does not rely on mixture modeling of distributions and avoids storing the full graph at once. We will cite DruM in the revised manuscript and include a more thorough discussion in the related work section to highlight the differences and complementarity. ## Discussion on Broader Impacts Thank you for noting the omission of an impact statement. We will include one in the revised manuscript. Our method is designed for memory-efficient graph generation, which could lower the barrier for training generative models in resource-constrained environments, such as laboratory settings or edge devices. We do not foresee any immediate or specific negative societal impacts, but will include a balanced impact statement in line with standard publication policies. ## Clarification on Distributed Training and Information Loss We appreciate this question. In our training procedure, the fundamental unit is a **block pair**, which consists of two block graphs and their mutual interactions. In addition to training score networks for each block, we train a lightweight interaction network to model the off-diagonal connections. This design enables distributed training, where block pairs can be allocated across multiple devices without the need for inter-device communication, since each block pair is self-contained. We will clarify this in the revised manuscript to highlight the scalability benefits of our approach. Once again, we sincerely thank the reviewer for the thoughtful and constructive feedback. We hope our response has satisfactorily addressed your questions and concerns. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I will keep my score to support this work. --- Reply to Comment 1.1.1: Comment: Thank you very much for your support and for taking the time to consider our rebuttal. We truly appreciate your constructive feedback and encouragement.
Summary: The manuscript introduces SBGD, a graph diffusion generative model based on a block representation inductive bias and aiming for lower memory complexity in the large-graph limit. SBGD first partitions the graph's nodes into $k$ non-overlapping groups using some pre-determined non-learned algorithm (here METIS). These groups partition the adjacency matrix into blocks of edges within the same group (diagonal blocks) or between two different groups (off-diagonal blocks). The Markovian forward process is decomposed into three independent "noising" conditional distribution: noising the node features $X_{i}^{(t)}$ group $i$'s nodes, noising the $i$-th diagonal block $A_{i}^{(t)}$ or noising the off-diagonal block $A_{ij}^{(t)}$ between group $i$ and group $j$. The noising and learned network score functions for the features and diagonal blocks are very Digress-like, and something lighter (not 100% clear to me at this time) is done for the off-diagonal blocks. The (sampling) backward process for each group is independent (though node features and diagonal blocks are jointly generated), and off-diagonal blocks are generated in view of the two groups they connect (again, details not 100% clear). Empirical results appear good. The theoretical scaling complexity is improved, and the manuscript's algorithm can run on OGBN-products while other approaches OOM on the same (frugal-ish) hardware. Claims And Evidence: (Claim numbering is my own.) ### Claim 1 > Empirical results show that SBGD achieves significant memory improvements (up to 6×) The core idea of breaking down a large graph into block structures to improve memory efficiency makes a lot of sense, and this claim is supported both theoretically (Section 3.3 and Table 1) and experimentally (last column of Table 2). ### Claim 2 > while maintaining comparable or even superior graph generation performance relative to state-of-the-art methods. The results presented in Table 2 support this claim that the method offers competitive performances despite the memory savings. I have reservations as to how performances on this type of benchmark would translate to concrete use-case scenario, but this is an issue pervading the whole generative graph modeling subfield: I don't think that I can fault the manuscript for this. ### Claim 3 > experiments on size generalization demonstrate that SBGD exhibit better size generation, particularly exceling at generating graphs larger than those seen in the training set I fail to find the experimental setting details for the results shown in Figures 3 and 4(a). Figure 3 is not convincing to me: the training graph (leftmost) in Figure 3 has block-like structure quite aligned with the manuscript's main new modeling assumption, and if similar graphs are used in Figure 4(a), then the manuscript's claim is greatly weakened, and should minimally be amended to clarify this reliance on very specific graph structures. ### Claim 4 > that smaller block representations initially improve performance, but excessively small blocks can degrade generative quality The claim makes sense and is supported by Figures 4(b-c), although it is not clear what dataset/protocol was used to produce these figures. The suggestion of "an optimal block size, with granularity depending on the data’s properties" could have (but wasn't) easily probed by plotting Figures 4 (b-c) for different datasets. ### Claim 5 > it exemplifies the principle of modularization in generative modeling, offering a novel way to explore generative models by decomposing complex tasks into more manageable components It does. Writing this review spurred many ideas as to how to "fix" this work's weakest points, ideas made "obvious" now that this corner of the design space has been made visible to me. This manuscript has faults, but its core idea is both valid and important. Methods And Evaluation Criteria: As mentioned above, there appear to be no details (neither in the main paper nor appendices) as to the experimental setting surrounding Figures 3 and 4. Also as mentioned above, to the best of my understanding, the evaluation criteria are limited but standard for the subfield. I believe that the presentation of SBGD lacks important details necessary for reproducibility, notably on the front of the off diagonal blocks. In particular, lines 8 and 10 of Algorithm 2 (Appendix B) are incomplete (and/or incompatible). Is $\widehat{A}_{12}$ obtained in one shot, or by DDIM/DDPM? This needs to be clear! Moreover, my understanding is that lines 6 and 7 of the same Algorithm 2 should both involve two separate calls to the score functions (like is done for the features on line 5), and that what is currently displayed on those lines implies a coupling between the diagonal blocks that is not mentioned elsewhere in the manuscript. Also, compared to Algorithm 1, the explicit dependencies in features are gone. This appendix appears to aim for conciseness, at the cost of clarity/correctness, but there is no space limit for appendices. Please be clear! Theoretical Claims: I looked at all equations in the main text, but did not check any proofs in details. I gave some more attention to the algorithms, and found them wanting. Experimental Designs Or Analyses: I already mentioned issues surrounding Figures 3 and 4. I am not anymore following this subfield in enough details to properly assess the details surrounding Table 2. Supplementary Material: I did not do an in depth review of the appendices. I gave some particular attention to Appendix B, and I dug up the use of METIS from Appendix C. I browsed Appendix D to assess that it did not give details about Figures 3 and 4. Relation To Broader Scientific Literature: I've been out of date with the subfield for the last 2 years. The paper that I know should be cited, are cited. What is surprising is the absence of papers that I don't know: except for Aref & Mostajabdaveh (2024) cited in appendix, it looks like I didn't miss much in the last two years... That, or there are missing references! Essential References Not Discussed: See above. Other Strengths And Weaknesses: There are typos, inconsistencies and/or lack of clarity at critical points in the manuscript. I have already mentioned Algorithm 2's deficiencies elsewhere. Figure 2 uses $\mathbb{P}$ to represent the forward (noising) process, and $\mathbb{Q}$ for the backward (denoising) one, but the unnumbered equations in Section 2.1 use the opposite. The caption of Figure 2 does not go in enough details to help me understand the model (and it has another typo in the expression for $\mathbb{Q}$). Further ablation studies are lacking, notably comparing random partitioning vs METIS. I don't currently understand the details as to how the off-diagonal blocks are obtained, but when I'll do I'll probably wonder what happens if this network is given as much capacity as the diagonal one (or the converse). Other Comments Or Suggestions: The manuscript favours an "assortative" view, where the off-diagonal blocks are sparser than the diagonal ones. But disassortative networks exist, and some generalization of this model could address them. More generally, I suggest that the authors ponder what are the "blind spots" of this modeling approach, and discuss them, the cases where they may be more/less relevant, and potential workarounds. For example, suppose a training dataset where all graphs have a single node that is connected to most other nodes, whereas all other nodes' degree have an hard cap that is much lower than the block size. In the present algorithm, each diagonal block is generated independently, so there is no way to guarantee that exactly one such block will contain exactly one high-degree node. This is an example of a "blind spot", some structure that this model cannot capture. Other examples includes anything that involves at least 3 diagonal blocks or 2 non-diagonal ones. Here is a variation that I think could eschew many of these limitations. We can view the problem as the autoregressive generation of blocks (diagonal or not) one-by-one, where generating each block demands to perform DDIM or DDPM. In this view, this paper chooses to generate all diagonal blocks first, then generate the off-diagonal ones conditional on the two diagonal blocks they each connect. However, other orders are possible, and the conditioning could *a priori* depend on all previously generated blocks using, e.g., GNNs. If the graphs are sparse, there is a regime where these GNNs can be kept in check while preserving the $C^2$ memory complexity. Questions For Authors: My score of 2 indicates that, in it's current state, I would reject this manuscript, but I do see a possible path forward for me to recommend acceptance. This path depends on the answers to the following questions. ### Question 1 Do you disagree with, or wish to make any clarification about, any assessment I made in this review? If yes, could you please clarify your perspective? ### Question 2 Is an actual forward (noising) process involved for the off-diagonal adjacency matrices? Is the inference done in one-shot? ### Question 3 Can you clarify/fix Algorithm 2? Please be explicit, using $i$ and $j$ instead of $1$ and $2$ to clearly indicate how to proceed when there are more than two blocks. In other words, I should be able to read the "Computation Complexity" of Figure 1 from the loop structure. Please pay special attention to function signatures: if $s_\theta$ accepts 4 arguments and returns one output in Algorithm 1, it should accept 4 arguments and return one output in Algorithm 2. You *may* do the same for Algorithm 1, especially if you notice some consistency/correctness issues while fixing Algorithm 2. ### Question 4 Can you provide all experimental details surrounding Figures 3 and 4? Please see my comments on Claims 3 and 4 above for details. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your knowledgeable comments and insightful feedback. We greatly appreciate the time and effort you’ve invested in evaluating our work. Below, we address your concerns and questions. Please note that the responses may not follow the exact order of your original remarks, but we have ensured all points are addressed thoroughly. ## Algorithms for Training and Sampling We sincerely apologize for the confusion caused by the outdated pseudocode and associated inaccuracies in the description of our sampling procedure. Below, we clarify both the training and sampling processes, and we will revise the pseudocode accordingly in the updated manuscript. - **Training Process:** Our training approach follows a strategy similar to ClusterGCN [1]. At each iteration, we randomly sample two diagonal blocks and perform score learning using the diffusion model. The interaction between these two blocks (i.e., the off-diagonal component) is not subjected to the diffusion process but is instead inferred using a lightweight interaction network in a one-shot manner. As discussed in the paper, the motivation for this design is rooted in the fact that off-diagonal interactions tend to be sparse and easier to learn directly. - **Sampling Process:** Once the score networks and interaction networks are trained, graph generation are very flexible. First, one can generate the $k$ diagonal blocks using any preferred sampling method (e.g., DDPM, DDIM, or other variants). Then, the off-diagonal connections between each pair of blocks are generated using the interaction network. We regret the confusion and will ensure the corrected pseudocode and explanations are clearly reflected in the revision. ## Experimental Setup for Figures 3 and 4 Thank you for pointing this out. Both Figures 3 and 4 use graphs generated from the cSBM model; however, the training graph sizes differ slightly: In Figure 3, training graphs have a size of 100. In Figure 4, training graphs have a size of 180, which matches the size used in our main experiments (e.g., Table 2). Additionally, the x-axis labels in Figure 4 should be interpreted as scaled by 100. We apologize for the oversight and will clarify this in the revised version of the manuscript. ## Limitations of the Model and Potential Extensions Thank you for your insightful critique regarding the model’s blind spots. We fully acknowledge that our approach—emphasizing assortative, block-wise dense structures—may not effectively capture certain types of graphs, particularly disassortative networks or structures like star graphs. These represent important limitations of the current modeling framework. That said, our structural prior is not arbitrary—it reflects common empirical patterns observed in many real-world networks, where assortativity and local structural cohesion are prevalent. In such domains, our assumptions are well-aligned with the underlying data distribution, contributing to the strong empirical results. We find your proposed variation—autoregressive generation of both diagonal and off-diagonal blocks, with conditioning on previously generated content using GNNs—particularly compelling and interesting! This approach could capture richer inter-block dependencies and remain scalable controlled memory use. This is an exciting direction for future research, and we are grateful for your thoughtful suggestion. ## Ablation on the Partition Algorithm Our choice of the METIS partitioning algorithm is motivated by our core modeling assumption: dense and homogeneous intra-block (diagonal) structure, with sparse inter-block (off-diagonal) connections. METIS is well-known for effectively partitioning graphs into balanced clusters while minimizing edge cuts—making it a natural fit for our setting. In early experiments, we did explore random partitioning, but found that it yielded significantly worse performance, likely due to poorly aligned block structures and noisier diffusion learning. We will try to include a more systematic study on this in the revision. Once again, we sincerely thank you for your constructive feedback. Your comments have helped us identify critical areas for clarification and potential extension, and have significantly improved the clarity, correctness, and quality of our manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. With the understanding that the camera ready will be adapted accordingly, I hereby increase my Overall Recommendation from 2 to 4. --- Reply to Comment 1.1.1: Comment: Again, thank you very much for taking the time to review our submission and for your thoughtful feedback. We sincerely appreciate your reconsideration and the decision to raise your scores. We will make sure to incorporate the points from our discussion in the revised version.
Summary: This paper proposes a block graph representation on top of the diffusion model framework for graph generation. It claims to resolve the scalability and size generalization problems of existing models and comes with experiments on various benchmark datasets. The paper is generally well-written and easy to follow. Claims And Evidence: 1. This paper claims to address the "scalability" concern of current graph generative models. To support this claim, it would be helpful to fully validate the approach with empirical experimental results. Could the authors compare the graph sizes in their benchmark datasets? For example, QM9 is relatively small in terms of average graph size (#nodes), while on the OGBN-products dataset, only the proposed method appears to train successfully without encountering an OOM error. Could the authors provide more details? 2. The paper claims to improve "size generalization" capability, demonstrated by the quantitative results in Fig. 4(a), where FID is used. However, since FID was originally proposed for image generation, it would be helpful if the authors could clarify what type of pre-trained models were used to compute FID in the context of graph evaluation. Providing these details would ensure a fair and rigorous comparison. Also, what are the experimental settings for Fig. 4(a)? Methods And Evaluation Criteria: 1. As mentioned above, some benchmark datasets may be too small to effectively validate the proposed method's ability to address the scalability issue. Please provide a clear comparison of their sizes. 2. Please explain the details of how FID is used for graph data. 3. Given that FCD and NSPDK (e.g., in [1,2]) are commonly used metrics for molecular generation tasks, could the authors clarify why these were not considered in the evaluation? [1] Yan Q, Liang Z, Song Y, Liao R, Wang L. Swingnn: Rethinking permutation invariance in diffusion models for graph generation. arXiv preprint arXiv:2307.01646. 2023 Jul 4. [2] Jo J, Lee S, Hwang SJ. Score-based generative modeling of graphs via the system of stochastic differential equations. InInternational conference on machine learning 2022 Jun 28 (pp. 10362-10383). PMLR. Theoretical Claims: The authors make several claims in Section 3.3: Theoretical Discussion, and below are my concerns. 1. First, it would be valuable to report the actual memory usage during both training and inference. On the same hardware, how does the model’s running speed compare to the baselines? While the big-O analysis in Table 1 is informative, there may be a gap between theoretical complexity analysis and practical deep neural network deployment. Additional details on real-world computational efficiency would strengthen the discussion. 2. The paper claims to have advantages in terms of "Distributed Training." However, I do not see any real experiments supporting the claim that "each computational node or device in a distributed system can handle smaller subproblems." While this may be conceptually true, the claim is fundamentally about practical implementation and must be validated by experiments to be considered valid. Experimental Designs Or Analyses: Already explained in the above comments. Supplementary Material: I did not find details on how FID is computed or the specific sizes of graphs in the benchmark datasets. Relation To Broader Scientific Literature: I don’t have any critical comments. Essential References Not Discussed: More recent papers on graph diffusion models should be included as baselines for quantitative performance comparisons and technical discussions. [1] Yan Q, Liang Z, Song Y, Liao R, Wang L. Swingnn: Rethinking permutation invariance in diffusion models for graph generation. arXiv preprint arXiv:2307.01646. 2023 Jul 4. [2] Laabid N, Rissanen S, Heinonen M, Solin A, Garg V. Equivariant Denoisers Cannot Copy Graphs: Align Your Graph Diffusion Models. InThe Thirteenth International Conference on Learning Representations. [3] Zhao L, Ding X, Akoglu L. Pard: Permutation-invariant autoregressive diffusion for graph generation. arXiv preprint arXiv:2402.03687. 2024 Feb 6. Other Strengths And Weaknesses: How is this paper different from previous work? Conceptually, it appears quite similar due to the decomposition process. [1] Zhao L, Ding X, Akoglu L. Pard: Permutation-invariant autoregressive diffusion for graph generation. arXiv preprint arXiv:2402.03687. 2024 Feb 6. Other Comments Or Suggestions: N/A Questions For Authors: Already explained in the above comments. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your detailed comments and thoughtful questions! They significantly help us improve the clarity and rigor of our manuscript. We appreciate the opportunity to address your concerns and questions below: ## Details on Graph Sizes in Datasets We apologize for omitting this information from the manuscript. The graph sizes used are publicly available, given the datasets utilized: 1. **Planar:** Graphs with 64 nodes. 2. **cSBM:** A synthetic graph dataset where graph size is configurable. For the main experiments (Table 2), we set the graph size to 180. 3. **QM9:** Graphs with sizes up to 9 nodes. 4. **OGBN-Arxiv:** Graphs up to 169,343 nodes. 5. **OGBN-Products:** Graphs up to 2,449,029 nodes, significantly larger than other datasets, which explains the "Out of Memory (OOM)" issues faced by most methods. We will include this explicitly in the revised manuscript. ## Clarification on FID Metric for Graph Evaluation We acknowledge your valid concern regarding the adaptation of FID (Frechet Inception Distance) from image to graph data. Following established practices in the literature ([1]), we adapted FID for graphs by utilizing a Graph Isomorphism Network (GIN) with random initialization as the embedding model. The embeddings were then used to compute FID scores following the standard procedure. We will clearly describe this approach in the updated manuscript. Regarding FCD and NSPDK, these metrics are specifically designed for molecular graphs, whereas our datasets primarily involve general graph structures. Thus, FID was chosen as a more universally applicable evaluation metric. ## Experimental Setup of Figure 4(a) For Figure 4(a), we used models trained on the cSBM dataset (graph size of 180) from Table 2, subsequently generating graphs of varying sizes. Please note the x-axis values represent graph size multiplied by 100. Figure 4 (a) shows that our method has the best size generalization performance among the tested methods. We apologize for any confusion and will clearly state this in the revised manuscript. ## Memory Usage and Computational Efficiency We appreciate your suggestion regarding memory usage. Alongside the theoretical analysis, we reported relative memory usage ratios from the experiments in Table 2's last column. We acknowledge discrepancies between theoretical predictions and practical performance, which stem from specific implementation details and hyperparameter selections. The revision will provide additional details, explicitly discussing runtime and implementation-dependent factors. ## Advantage for Distributed Training Due to technical constraints, we could not empirically verify advantages in distributed training scenarios. Therefore, we presented this as a theoretical discussion point rather than a main claim. This perspective will be clearly stated in the revised manuscript. ## Distinction from PARD Thank you for bringing PARD to our attention. While conceptually similar due to the principle on block structure, PARD represents a distinct class of graph generative models. It employs iterative component-based graph generation, aligning more closely with autoregressive models such as GraphRNN and EDGE. As such, PARD still require the entire graph in memory during generation, as noted in the original PARD paper ([2]). Our work specifically explores score-based diffusion generative models, thus positioning itself orthogonally to PARD. We appreciate the recommendation and will cite and discuss PARD clearly in the related work section of our revised manuscript. We greatly appreciate your insightful feedback and believe our revisions fully address your questions and concerns. [1] "On Evaluation Metrics for Graph Generative Models." *International Conference on Learning Representations (ICLR)*. [2] "PARD: Permutation-invariant Autoregressive Diffusion for Graph Generation," arXiv preprint arXiv:2402.03687, 2024.
null
null
null
null
null
null
null
null
BOPO: Neural Combinatorial Optimization via Best-anchored and Objective-guided Preference Optimization
Accept (poster)
Summary: The paper proposes to use (a specific variant of) preference-based RL losses instead of regular reward-based policy gradients for combinatorial optimization problems. The primary finding is that this Preference Optimization for Combinatorial Optimization (POCO) loss improves sample complexity and asymptotic performance over standard (PPO) and specialized baselines (SLIM), over multiple different combinatorial optimization problems. This is a surprising result, because the original reason preference losses are used is due to lack of precise rewards (e.g. with human feedback). I would've thought that for combinatorial optimization which indeed has precise rewards, using preferences (which is imprecise) can be worse. Claims And Evidence: Because of how unintuitive the results are (i.e. even with precise rewards, preference-based loss leads to better outcomes), I'm inclined to think of possible subtle numeric effects which led to this result. For example: * Reward normalization can be tricky with exact rewards, and training the value-head to predict advantages typically can cause problems if not careful. * For $n$ objective evaluations, preference optimization uses ${n \choose 2}$ pairs, while policy gradient would only use $n$, so preference optimization performs more gradient updates due to more data. The paper proposes filtering methods (uniform filtering and best-anchored pairing) to reduce the data size and only focus on diverse or high quality solutions. * If this is crucial to improving performance, I'm wondering if there would've also been a policy gradient variant (e.g. add a numeric transformation to upweight large rewards) that would've equally done well. I'm not asking the authors to ablate these (since this could require a lot of work), but I'm just stating that the devil's in the details. Methods And Evaluation Criteria: Yes, the paper evaluates over multiple different combinatorial optimization problems (Job-shop scheduling problem, Traveling Salesman Problem, Flexible Job-shop scheduling problem), with multiple baselines. I do think the presentation could be improved however (too many abbreviations and large tables, and too many random details in the text). It's possible to improve the experimental presentation pictorially using bars or graphs. Overall it's not a priority IMO to show outperformance over very domain-specific combinatorial optimization baselines - what is a priority, like I mentioned earlier, is the conclusion that preference-based RL beats exact-reward RL. Theoretical Claims: Not applicable, no theory. Experimental Designs Or Analyses: Please see my "Claims and Evidence" part. I'm sure that the authors did a solid job on running multiple experiments and baselines, but experimental RL requires a lot of small little subtle details which can strongly affect results. Supplementary Material: Mainly Section C, to verify what types of RL losses are being used. Relation To Broader Scientific Literature: It is interesting to see how some of the RL-HF literature (preference-based learning) can be adopted to classic RL (e.g. combinatorial optimization). Originally, preference-based losses were only created to deal with the noisiness of human feedback, but I would've never expected it to do better than traditional policy gradients for exact rewards. Essential References Not Discussed: N/A Other Strengths And Weaknesses: # Strengths Using RL-HF techniques like DPO for exact-reward problems, outside of human feedback, is an important topic for linking the field of LLM alignment with traditional RL, and I applaud this paper in doing so. # Weaknesses The current presentation of the paper makes the contribution incremental and not fundamental - i.e. due to lots of random details and abbreviations stuffed into the writing, the overall read can look like "We tried a different tweak and got slightly better results in a very specific domain". I would strongly encourage the authors to avoid this style of writing. Other Comments Or Suggestions: I'm not particularly for or against this paper - i.e. I'm pretty lukewarm about it, because: 1. When it comes to the actual combinatorial optimization improvements, the gains are fairly incremental, and overall combinatorial optimization has become very saturated. 2. The result that preference-based optimization outperforms exact rewards, _is_ interesting, but the paper isn't investigating this fundamental result in-so-much-as applying it for combinatorial optimization Questions For Authors: See my concerns above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments. >**Q1:** I would've thought that for combinatorial optimization which indeed has precise rewards, using preferences (which is imprecise) can be worse. We agree that precise rewards are also crucial to combinatorial optimization. **Meanwhile, we would like to clarify that POCO, unlike standard preference optimization techniques used in LLM, incorporates the precise objective value (i.e., reward in RL) into our loss function.** As highlighted in our submission, this key contribution stems from our insight into the critical role of precise rewards in combinatorial optimization. Specifically, the proposed POCO loss is $\log\sigma\left(\frac{g(y_l)}{g(y_w)}\left(\frac{\log\pi_\theta(y_w|x)}{|y_w|}-\frac{\log\pi_\theta(y_l|x)}{|y_l|}\right)\right)$, where $\frac{g(y_l)}{g(y_w)}$ is a scaling factor derived from precise objective values for the best solution and the inferior one. The results in Table 6 and discussions in Section 5.4 demonstrate the effectiveness of incorporating precise objective values into the POCO loss, e.g., POCO reduces the gap to 12.9% on the DMU benchmark, outperforming its variant without this scaling factor (gap 13.2%). >**Q2:** I'm inclined to think of possible subtle numeric effects which led to this result. While we acknowledge that numerical artifacts can sometimes confound comparisons, we emphasize that our core contribution, a novel preference-based training framework, is not attributable solely to such effects. * **By leveraging preference learning between the best solution and diverse inferior ones, POCO guides the model toward promising decision trajectories and discern suboptimal choices, demonstrating its clear advantage against RL methods.** POCO exhibits significantly superior solution quality and higher training efficiency ($48\times$ speedup for JSP), as shown in the response to Reviewer upvg Q5 and Q1, respectively. Note that POMO, a REINFORCE-based method, uses terminal rewards and computes baselines via means, eliminating value-head estimation and minimizing numeric effect risks. * Here is a closer look at policy gradient. POMO uses a baseline to distinguish positive/negative optimization signals for each sample: $(g(y) - b) \nabla_\theta \log \pi_{\theta}(y|x).$ In contrast, POCO directly distinguishes based on exact preferences: $\frac{g(y_l)}{g(y_w)}\frac{(1-\sigma(z))}{|y_w|}\nabla_\theta\log\pi_\theta(y_w|x)-\frac{g(y_l)}{g(y_w)}\frac{(1-\sigma(z))}{|y_l|}\nabla_\theta \log\pi_\theta(y_l|x),$ where $z=\frac{g(y_l)}{g(y_w)}\left(\frac{1}{|y_w|}\log \pi_\theta(y_w|x)-\frac{1}{|y_l|}\log\pi_\theta(y_l|x)\right)$. The optimization signal strength correlates with the log-likelihood difference $z$, requiring the model to maximize the probabilities gap between $y_w$ and $y_l$. This results in faster convergence and better performance. * **POCO's advantage primarily stems from pairwise preference learning, rather than simply being a RL variant with the proposed filtering method.** To demonstrate this point, we compare POCO with a POMO variant employing our Hybrid Rollout and Uniform Filtering (denoted as POMO+). The key difference lies in the loss: while POCO uses pairwise loss, POMO+ calculates a standard POMO loss based on the filtered $K$ solutions, as it lacks the pairwise mechanism. Results in Table H show that POMO+ is inferior to POCO, demonstrating the effectiveness of the preference learning of POCO. Moreover, POMO+ is even inferior to POMO, underscoring that the filtering method is tightly adapted to POCO and may not be applicable to general RL methods. Based on the above two points, we would like to reiterate that POCO's unique advantages stem from its preference learning and the two tightly coupled designs (the pair construction method and the loss function based on objective values), rather than simply relying on subtle numeric effects. **Table H: Gaps (%) on TSP** Method|TSP20|TSP50 -|-|- POMO|0.002|0.042 POMO+|0.005|0.081 POCO|**0.000**|**0.009** >**Q3:** Mainly Section C, to verify what types of RL losses are being used. We have stated typical RL losses in Section 3.1, including REINFORCE for routing problems and PPO for scheduling problems. According to your suggestions, we will add a detailed description and analysis of these RL losses to Appendix C. >**Q4:** The current presentation of the paper makes the contribution incremental and not fundamental ... Our main contribution is a novel training framework, distinctly different from traditional RL, that not only improves performance but also convergence speed. We will thoroughly proofread the paper to highlight this core contribution. >**Q5:** ..., the gains are fairly incremental, and overall combinatorial optimization has become very saturated. In the NCO field, such improvements are considered significant. POCO delivers substantial performance gains and faster convergence, as demonstrated in our responses to Reviewer upvg Q5 and Q1, respectively. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification, especially regarding the RL loss still taking into account the objective values (and not just pairwise comparison). I will increase my score, although I am still lukewarm about this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your clarification and for acknowledging our explanation. We appreciate your recognition and are encouraged by the increased score, even if you remain cautiously optimistic about the paper.
Summary: This paper first introduce the concept of Preference Optimization to the area of NCO. Generally, as the expected advantage of PO, POCO demonstrates instance-efficiency comparing to RL and SLL. (As shown in Figure 2 and Figure 3) ## update after rebuttal: This is a novel and inspiring paper, so I keep my positive rating. Claims And Evidence: Some claims in the paragraph ``Comparison with Other Losses.`` are not clear. The reference model is actually a constraint for LLM, I do not agree that this can be a characteristic of the POCO loss. Instead, the Adaptive Scaling part can distinguish DPO and the POCO loss. POCO loss excludes the target reward margin term in SimPO but this is not discussed in that paragraph. Methods And Evaluation Criteria: Generally okay. Theoretical Claims: This paper do not contain theoretical claims. Experimental Designs Or Analyses: I recommend to also includes SL-based POMO as baseline. [1] [1] Yao S, Lin X, Wang J, et al. Rethinking Supervised Learning based Neural Combinatorial Optimization for Routing Problem[J]. ACM Transactions on Evolutionary Learning, 2024. Supplementary Material: Yes. I review the detailed implementation of Networks, illustrations of losses, and definition of CO problems. Relation To Broader Scientific Literature: This paper discuss a new training paradigm for NCO, i.e., Preference Optimization. POCO shows advantages in intance-efficiency and final performance. Essential References Not Discussed: This paper has included prior related findings. Other Strengths And Weaknesses: Strength: 1. Introducing Preference Optimization to NCO is novel. 2. Compared to RL-based and SLL-based baselines, POCO exhibits instance-efficiency in training.  Weakness: Generally, there are no major weaknesses in this paper. One minor weakness of this paper is that the preparation stage for POCO (i.e., ``Preference Pair Construction``) can be time-consuming compared to RL-based methods. So POCO can hardly be extended to train on large-scale CO. Other Comments Or Suggestions: Please refer to Questions For Authors. Questions For Authors: 1. Section 3.1. Neural Combinatorial Optimization (NCO) provides a detailed introduction to RL methods. As far as I know, there is not much work using PPO in NCO training, so is this part of the description a bit redundant? 2. The design of Adaptive Scaling is reasonable, but my concern is that the distribution of objective function values is often different over different CO problems. Will using raw scaling factors lose stability on different problems? Can rank-based factors or factors after normalization be better? 3. Can you provide a rough preparation time (used for ``Preference Pair Construction``) for each instance? 4. Can POCO generalize to NCO models other than POMO on TSP? It would be great if POCO could still demonstrate its performance on other models. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable comments. >**Q1:** I recommend to also includes SL-based POMO as baseline. We have added results from the SL-based method [1] to the table E. Since all methods use the same model, differences stem from training algorithms. **POCO achieves comparable or better performance than SL-based methods without requiring labeled optimal solutions, further demonstrating the superiority of POCO**. Results show that on TSP20, SL-based and POCO perform identically. On TSP50, SL-based slightly outperforms POCO (0.001% vs 0.01%). However, on TSP100, POCO is superior (0.04% vs 0.05%). **Table E: Gaps (%) on TSP Instances** Method|TSP20|TSP50|TSP100 -|-|-|- POMO(aug)|0.00|0.03|0.14 SLIM(aug)|0.01|0.15|1.17 SL(aug)|**0.00**|**0.00**|0.05 POCO(aug)|**0.00**|0.01|**0.04** [1] Rethinking Supervised Learning based Neural Combinatorial Optimization for Routing Problem. ACM Transactions on Evolutionary Learning, 2024. >**Q2:** the preparation stage for POCO can be time-consuming compared to RL-based methods. Can you provide a preparation time for each instance? This problem is mainly caused by us sampling more solutions (larger $B$). We provide the following clarification: 1. **POCO does not necessarily require sampling a large number of solutions to maintain performance**. As noted in our response to Reviewer upvg Q4, POCO still achieved superior performance when sampling the same number of solutions as POMO, although we recommend an best value of 256 for $B$. 2. **Training time does not significantly increase with larger** $B$. Table F shows the approximate training time per epoch for different problem scales and $B$. For small-scale problems, the training time has hardly changed. For larger problems like TSP100, even if $B$ is increased by 2.5 times, the training time only increases by 1.6 times. **Table F: Training Time Per Epoch** $B$|TSP20|TSP50|TSP100 -|-|-|-| 20/50/100|2m30s|4m30s|12m 128|2m30s|5m|- 256|-|-|20m >**Q3:** As far as I know, there is not much work using PPO in NCO training, so is this part of the description a bit redundant? We acknowledge that REINFORCE is the dominant training algorithm for routing problems. However, as noted in our paper, PPO is a typical method for scheduling problems [2-4]. We remain open to further refinements to enhance clarity and relevance. [2] Learning to dispatch for job shop scheduling via deep reinforcement learning. NeurIPS 2020. [3] Flexible job-shop scheduling via graph neural network and deep reinforcement learning. IEEE Transactions on Industrial Informatics, 2023. [4] Solving flexible job shop scheduling problems via deep reinforcement learning. Expert Systems with Applications, 2024. >**Q4:** Will using raw scaling factors lose stability on different problems? Can rank-based factors or factors after normalization be better? This is a very interesting question. We agree this is a valuable direction for future research. 1. We observed that scaling factors vary across different problems, with distributions differing significantly. For some challenging problems, such as those where objective values have large or minimal gaps, raw scaling factors might fail. This is an important area for further exploration. 2. Rank-based factors offer stability in any scenario but may not accurately reflect true solution differences, potentially leading to suboptimal performance. Normalized factors, however, seem promising, with techniques like normalization, clipping, or compression functions offering potential improvements. >**Q5:** Can POCO generalize to NCO models other than POMO on TSP? We would like to emphasize that POCO, like POMO, is a general and fundamental training algorithm. POMO serves as the backbone for most SOTA methods, so naturally, the backbone of these SOTA methods can also be replaced with the superior POCO. To further showcase this potential, we conducted additional experiments on InViT [5]. Specifically, we adopt POCO to train InViT, but due to memory constraints, we set the batch size to 64 for InViT (also $B$ for InViT+POCO), with $K=8$. We retrained InViT with identical parameters. Results in Table G shows that InViT+POCO outperformes InViT on various distributions and larger-scale problems. **Table G: Gaps (%) on TSP** Method|U-1k|U-5k|U-10k|C-1k|C-5k|C-10k -|-|-|-|-|-|- InViT-2V|6.49|8.21|6.61|10.01|10.36|9.23 InViT-2V+POCO|**5.98**|**7.67**|**6.00**|**9.58**|**9.81**|**9.18** Method|E-1k|E-5k|E-10k|I-1k|I-5k|I-10k -|-|-|-|-|-|- InViT-2V|9.61|11.58|9.72|7.95|9.42|7.39 InViT-2V+POCO|**8.67**|**10.59**|**8.94**|**7.38**|**8.88**|**7.04** Distribution: Uniform (U), Cluster (C), Explosion (E), Implosion (I). The numbers behind represent the problem scales. Note that we report the results both for 300 epochs due to extended training time. Full results for 800 epochs will be provided in the subsequent discussion phase. [5] InViT: A generalizable routing problem solver with invariant nested view transformer. ICML 2024. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I will keep my accept score. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for maintaining your accept score. We greatly appreciate your support and consideration.
Summary: The introduction of POCO, a training paradigm for NCO to enhance sample efficiency. This is accomplished by (1) design a preference pair construction method for improving exploration and exploitation, (2) gradual building of the loss function. This is evaluated on three problems Job shop scheduling, TSP, and FJTSP. ## Update after rebuttal: The authors have addressed most of my comments. I would like to thank them for the efforts. I will increase my score to 3. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. They are mostly sound. Supplementary Material: Yes. All parts. Relation To Broader Scientific Literature: The authors provide a new training paradigm for NCO that is architecture-agnostic. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strengthens: - No need for labels for the training graphs. - OOD evaluation at inference. - POCO evaluation with different backbone models. ## Weaknesses: - Subpar results when compared to SOTA heuristic Concorde. Results are only good when compared to other NCO methods. Concorde is not used for TSP. Is there a reason? In general, what is the advantage of using this NCO when compared to LKH3 and Concorde? - How about TSP-1000? - Scalability results are missing for TSP? can the authors test their trained model on instances where the ILP fails? Other Comments Or Suggestions: See Other Strengths And Weaknesses. Questions For Authors: See Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging feedback and incisive questions. >**Q1:** Concorde is not used for TSP. We have included Concorde's results from POMO [1] and added them to the table C for comparison. Concorde, LKH3, and Gurobi are traditional iterative search-based algorithms with identical performance (0.0% gap), differing only in solving time. LKH3, being a strong heuristic solver, can offer fast approximate solutions for large-scale problems and even delivery the optimal solutions for small-scale problems. Including Concorde's results does not alter our existing conclusions, as we have already incorporated other strong heuristic methods. **Table C: Gaps (%) on TSP Instances** |Method|TSP20 Gap|TSP20 Time|TSP50 Gap|TSP50 Time|TSP100 Gap| TSP100 Time| |-|-|-|-|-|-|-| |Concorde|0.00|5m|0.00|13m|0.00|1h| |Gurobi|0.00|42s|0.00|6m|0.00|25m| |LKH3|0.00|7s|0.00| 2m|0.00|17m| |POMO (aug.)|0.00|3.6s|0.03|6.6s|0.14| 18.1s| |SLIM (aug.)|0.01| 3.6s|0.15|6.6s|1.17|18.1s| |POCO (aug.)|**0.00**|3.6s|**0.01**|6.6s|**0.04**|18.1s| [1] POMO: Policy optimization with multiple optima for reinforcement learning. NeurIPS 2020. >**Q2:** Subpar results when compared to SOTA heuristic Concorde. In general, what is the advantage of using this NCO when compared to LKH3 and Concorde? Unlike traditional iterative search-based methods, NCO without expert knowledge balances solving time and performance, so its contribution should not be evaluated solely based on performance. * **A key advantage of NCO is its ability to achieve near-optimal solutions with significantly shorter solving times, especially for large-scale problems**: * For TSP100, POCO achieves a near-optimal gap of 0.04% while reducing solving time dramatically (17m for LKH3 vs. 18s for POCO). * On large-scale JSP problems (TA 100x20), NCO demonstrates clear advantages. Exact algorithms like Gurobi and OR-Tools, within 1 hour (see Table 12 in Appendix D), achieve 11% and 3.9% gap (see Table 1), respectively, which are inferior to the 1.4% gap POCO achieves in just 8 seconds (see Table 12 and Table 1, respectively). * **Another advantage of NCO is that it does not require expert knowledge to design complex algorithms**, thereby expanding its applicability to a wider range of scenarios. >**Q3:** How about TSP-1000? Can the authors test their trained model on instances where the ILP fails? To further showcase the potential of POCO on large-scale problems, we conducted additional experiments on InViT [2] framework, which is designed to solve large-scale problems where the ILP fails. Note that POMO is not designed for large-scale problems, but we have also demonstrated the superiority of POCO with generalization experiments on TSPLIB (see Table 3). Main results generalizing to various distributions and larger-scales are shown in Table D. InViT+POCO outperformes InViT on all distributions and scales, underscoring its superiority. At these scales, Gurobi and LKH3 require 30 minutes to 1.5 days to solve, whereas NCO needs only up to approximately 10 minutes. **Table D: Gaps (%) Across Different Distributions and Scales** |Method|U-1k|U-5k|U-10k|C-1k|C-5k|C-10k| |-|-|-|-|-|-|-| |InViT-2V|6.49|8.21|6.61|10.01|10.36|9.23| |InViT-2V+POCO|**5.98**|**7.67**|**6.00**|**9.58**|**9.81**|**9.18**| |Method|E-1k|E-5k|E-10k|I-1k|I-5k|I-10k| |-|-|-|-|-|-|-| |InViT-2V|9.61|11.58|9.72|7.95|9.42|7.39| |InViT-2V+POCO|**8.67**|**10.59**|**8.94**|**7.38**|**8.88**|**7.04**| "U": Uniform, "C": Cluster, "E": Explosion, "I": Implosion distributions. The numbers behind represent the problem scales. Note that we report the results both for 300 epochs due to extended training time. Full results for 800 epochs will be provided in the subsequent discussion phase. [2] InViT: A generalizable routing problem solver with invariant nested view transformer. ICML 2024.
Summary: This paper proposes POCO (Preference Optimization for Combinatorial Optimization), a method that integrates preference learning into reinforcement learning (RL) to address combinatorial optimization problems. Additionally, the authors introduce Hybrid Rollout, Uniform Filtering, and Best-anchored Pairing as methods for constructing preference pairs used in preference learning. The proposed approach is evaluated and tested on combinatorial optimization tasks including the Job-shop Scheduling Problem (JSP), Traveling Salesman Problem (TSP), and Flexible Job-shop Scheduling Problem (FJSP). Claims And Evidence: This paper proposes a method for training combinatorial optimization (CO) solver models using preference learning and introduces specific methods for constructing preference pairs required for this approach. However, the rationale behind applying preference learning to train CO solver models is insufficiently explained. Although the introduction states that preference learning was introduced to improve sample efficiency, it remains unclear precisely how sample efficiency is enhanced by this method. Moreover, similar to most existing Neural CO studies, all three experiments performed in this paper utilized *generated* training datasets. Experiments were conducted on Job-shop Scheduling Problem (JSP), Traveling Salesman Problem (TSP), and Flexible Job-shop Scheduling Problem (FJSP). However, the experimental setup for TSP appears inadequate. Further details regarding this issue are provided in the 'Experimental Designs Or Analyses' section. Methods And Evaluation Criteria: As mentioned earlier, the benefits of applying preference learning proposed in this paper are not clearly demonstrated. Furthermore, in the preference pair construction method, rollouts are conducted B times, but only B/K of these rollouts are utilized for model updates, discarding the remaining data. This approach inevitably results in a loss of training resources due to discarding rollout data by a factor of K. Given the same amount of time and computational resources, it remains unclear what specific advantages can be achieved by increasing the values of B and K. Theoretical Claims: N/A Experimental Designs Or Analyses: Regarding the experiments conducted on TSP instances, there exist several recent studies (e.g., LEHD, Pointerformer, InViT) that have improved upon POMO (Kwon et al., 2020) for solving the TSP. Although SLIM is a recently proposed method included in this paper's comparisons, it performs worse than POMO in most experimental settings presented in Tables 2 and 3. Therefore, to better demonstrate the effectiveness of the proposed method, it would be beneficial to compare it against these recent state-of-the-art studies that have shown superior performance over POMO. Additionally, in the uniformly generated TSP instance experiments presented in Table 2, the node sizes used are relatively small compared to those evaluated in recent studies. It is recommended to perform comparisons on larger-scale instances. In Figure 3, POMO generates 20 solutions per instance, whereas POCO generates 128 solutions per instance. Due to this discrepancy in the number of solutions generated by each method per instance, it is difficult to consider this experiment as a fair comparison. Furthermore, the performance gap between POMO and POCO reported in Figure 3 is only 0.0013, which is too small to be considered a meaningful difference. Luo, Fu, et al. "Neural combinatorial optimization with heavy decoder: Toward large scale generalization." NeurIPS 2023. Jin, Yan, et al. "Pointerformer: Deep reinforced multi-pointer transformer for the traveling salesman problem." AAAI 2023. Fang, Han, et al. "Invit: A generalizable routing problem solver with invariant nested view transformer." ICML 2024. Supplementary Material: I have briefly reviewed the supplementary material. Relation To Broader Scientific Literature: None Essential References Not Discussed: Luo, Fu, et al. "Neural combinatorial optimization with heavy decoder: Toward large scale generalization." NeurIPS 2023. Jin, Yan, et al. "Pointerformer: Deep reinforced multi-pointer transformer for the traveling salesman problem." AAAI 2023. Fang, Han, et al. "Invit: A generalizable routing problem solver with invariant nested view transformer." ICML 2024. Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments. >**Q1:** it remains unclear precisely how sample efficiency is enhanced. We have analyzed POCO's higher sample efficiency in Section 5.4. We further precisely demonstrate significant improvements in sample efficiency by supplementing quantitative results. Note that we retrained models on TSP20/50 with $B=20/50$ & $K=8$, ensuring a fair comparison. POCO achieves faster convergence than RL when using the same numbers of samples and gradient steps: 1. POCO achieves approximately $48\times$, $20\times$, and $3\times$ higher training efficiency than RL methods on JSP, TSP20, and TSP50, respectively. Specifically, POCO requires only 2500 training steps, 10 epochs, and 70 epochs to match the gaps achieved by RL methods, which take 120000 steps, 200 epochs, and 200 epochs, respectively. 2. POCO achieves significantly superior solution quality within the same number of training epochs. On JSP, POCO attains 0.76%/4.12% lower gaps than RL/SLIM in just 5 epochs. For TSP20/50, POCO achieves 0.0076%/0.0687% lower gaps than POMO in 100 epochs. >**Q2:** it remains unclear what specific advantages can be achieved by increasing the values of $B$ and $K$. The advantages of increasing $B$ and $K$ under the same time and computational resources can be seen in Section 5.5. 1. Increasing $B$ improves performance while increasing memory usage. As shown in Figure 4(a), the gap reduces from 8.3% to 7.9% as $B$ increases from 32 to 512, plateauing at $B=256$. Beyond this point, increasing $B$ provides negligible improvement, as sampling more from $\pi$ rarely yields better solutions. This result demostrates our reasonable setting of $B=256$. 2. As presented in Figures 4(b-c), the recommended value for $K$ is 16, and it should scale proportionally with $B$. >**Q3:** comparison with recent SOTA studies. We would like to emphasize that POCO, like POMO, is a general and fundamental training algorithm. POMO serves as the backbone for most SOTA methods, so naturally, the backbone of these SOTA methods can also be replaced with the superior POCO. To further showcase this potential, we conducted additional experiments on InViT, which has demonstrated superior performance over LEHD and Pointerformer in its paper. Specifically, we adopt POCO to train InViT, but due to memory constraints, we set the batch size to 64 for InViT (also $B$ for InViT+POCO), with $K=8$. We retrained InViT with identical parameters. Results in Table A shows that InViT+POCO outperformes InViT on various distributions and larger-scale problems. **Table A: Gaps (%) on TSP** Method|U-1k|U-5k|U-10k|C-1k|C-5k|C-10k -|-|-|-|-|-|- InViT-2V|6.49|8.21|6.61|10.01|10.36|9.23 InViT-2V+POCO|**5.98**|**7.67**|**6.00**|**9.58**|**9.81**|**9.18** Method|E-1k|E-5k|E-10k|I-1k|I-5k|I-10k -|-|-|-|-|-|- InViT-2V|9.61|11.58|9.72|7.95|9.42|7.39 InViT-2V+POCO|**8.67**|**10.59**|**8.94**|**7.38**|**8.88**|**7.04** Distribution: Uniform (U), Cluster (C), Explosion (E), Implosion (I). The numbers behind represent the problem scales. Note that we report the results both for 300 epochs due to extended training time. Full results for 800 epochs will be provided in the subsequent discussion phase. >**Q4:** Due to this discrepancy in the number of solutions generated by each method per instance, it is difficult to consider this experiment as a fair comparison. For a fair comparison, we retrained models on TSP20/50 with $B=20/50$ & $K=8$, ensuring it samples the same number of solutions during training as POMO. Despite reducing $B$, POCO maintains performance comparable to $B=128$, with superior solution quality (see Table B), higher training efficiency (see response to Q1), and better generalization (see response to Q3). **Table B: Gaps (%) on TSP.** Method|TSP20|TSP50 -|-|- POMO (aug)|0.0016|0.0418 POCO (aug)|**0.0000**|**0.0085** >**Q5:** the performance gap is too small to be considered a meaningful difference. 1. For well-optimized small-scale or easy problems, such as TSP20, achieving further improvement, e.g., 0.0013% gap to the optimal solution, is quite challenging. However, the improvements become more pronounced for larger-scale or harder problems: (1) On larger-scale TSP100, POCO's gap 0.04% vs POMO's 0.14% (see Table 2); (2) On the harder JSP, POCO's gap 12.9% vs $SLIM_{MGL}$'s 16.5% on DMU benchmark (see Table 1); (3) On much larger-scale unseen TSPLIB instances with 500-1000 nodes, POCO's gap 22.44% vs POMO's 30.14% (see Table 3). 2. In the NCO field, such improvements are considered significant, as evidenced by literature from top AI conferences [1,2]. In [1], among 12 TSP cases, 10 cases (83%) show improvements over the second-best neural method smaller than 0.1%. Similarly, in [2], 6 out of 12 TSP cases (50%) have similar improvement levels. [1] Collaboration! Towards Robust Neural Methods for Routing Problems. NeurIPS 2024. [2] Learning encodings for constructive neural combinatorial optimization needs to regret. AAAI 2024. --- Rebuttal Comment 1.1: Comment: When considering sample efficiency, I believe it is more reasonable to focus on the total number of generated samples rather than the amount used for updates. Nevertheless, the other aspects have been addressed well overall. Therefore, I will increase my score by 1 point. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our response and for your valuable feedback. We are pleased to address your remain concerns. In our experiments, our POCO is fairly compared with state-of-the-art training methods using the same total number of generated samples, demonstrating POCO's higher sample efficiency. Specifically, POCO outperforms the state-of-the-art (self-labeling-based) SLIM on JSP, as shown in Section 5.4 in our submission; POCO surpasses the state-of-the-art (RL-based) POMO and InViT on TSP, as shown in the supplementary experiments in our previous response. Remarkably, in these comparisons, POCO and POMO/InViT both sample $D\times B$ solutions in each batch, but POCO uses only $D\times K$ solutions after uniform filtering for gradient updates, where $D$ is the batch size and $B$ is the number of sampled solutions per instance. This fair comparison further highlights POCO's significantly higher sample efficiency. In addition, the full results for 800 epochs on InViT, as mentioned in our previous response, are updated in the table below, consistently demonstrating POCO's higher efficiency and superior performance. The results of InViT are cited from its original paper [1], with $D=128$ and $B=1$. Due to memory constraints, we set $D=64$ and $B=1$ for InViT. For a fair comparison, we set $D=1$ and $B=64$ for InViT+POCO. **Table: Gaps (%) Across Different Distributions and Scales** |Method|U-1k|U-5k|U-10k|C-1k|C-5k|C-10k| |-|-|-|-|-|-|-| |InViT-2V ($D=128,B=1$)|6.15|6.88|6.18|9.32|9.07|9.02| |InViT-2V ($D=64,B=1$)|6.31|6.94|6.21|9.32|9.07|9.02| |InViT-2V+POCO ($D=1,B=64$)|**5.13**|**6.54**|**4.97**|**9.13**|**8.96**|**8.68**| |Method|E-1k|E-5k|E-10k|I-1k|I-5k|I-10k| |-|-|-|-|-|-|-| |InViT-2V ($D=128,B=1$)|9.11|9.92|9.32|6.63|**7.63**|6.78| |InViT-2V ($D=64,B=1$)|9.26|10.04|9.44|6.69|8.64|6.81| |InViT-2V+POCO ($D=1,B=64$)|**8.37**|**9.79**|**8.26**|**6.58**|8.57|**5.93**| "U": Uniform, "C": Cluster, "E": Explosion, "I": Implosion distributions. The numbers behind represent the problem scales. [1] InViT: A generalizable routing problem solver with invariant nested view transformer. ICML 2024.
null
null
null
null
null
null
Value-Based Deep RL Scales Predictably
Accept (poster)
Summary: This paper studies the scaling law in value-based RL methods. In particular, it provides a thorough analysis on how different components, such as batch size and number of gradient steps, affect the performance and computation budget. *** ## update after rebuttal I have read the rebuttal as well as other responses. I will keep my score. Claims And Evidence: Claims made in the paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: This paper has made extensive evaluation in both theoretical and empirical aspects. Theoretical Claims: There is no proof in the paper. Experimental Designs Or Analyses: Extensive empirical results are presented. Supplementary Material: I have reviewed the appendix. Relation To Broader Scientific Literature: None. Essential References Not Discussed: Essential references are adequately discussed. Other Strengths And Weaknesses: + I believe this work tries to handle a very important problem. Scalability is perhaps the biggest challenge in off-policy algorithms, and this study attempts to demonstrate that they can still be scaled effectively. The methodology used to identify the optimal hyperparameter settings is also interesting. + In practice, off-policy algorithms are considered to have poor scalability, not only due to the large batch sizes and excessive gradient steps but also because they require a buffer to store all the data collected so far, which leads to high memory usage. Additionally, the wall-time efficiency of off-policy methods is poor, as they often take much longer to train an agent compared to on-policy algorithms. Therefore, it would be more convincing if it could be demonstrated that value-based methods can scale in terms of both memory usage and wall-time efficiency. Other Comments Or Suggestions: Please see other parts of the review. Questions For Authors: + What is the wall-time efficiency of value-based algorithms in terms of scalability? For example, How long does the SAC algorithm take to complete a 5M environment-step training in DMC? + Does it improve the memory usage when using the optimal hyperparameter? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the feedback and a positive review of the paper. We are glad that you find our evidence clear and convincing. We answer your questions below. Please let us know if you find your concerns addressed, and if so we would be grateful if you would be willing to raise your score. We are happy to address any remaining concerns. > What is the wall-time efficiency of value-based algorithms in terms of scalability? For example, How long does the SAC algorithm take to complete a 5M environment-step training in DMC? The wall-clock efficiency of these algorithms is highly dependent on the implementation of both the agent and simulator, as well as the computing infrastructure. For an NVIDIA A100 GPU, UTD=1 configuration, and DeepMind Control Suite of tasks (which present a CPU-based simulator), training for 5M environment steps should take around 12 hours. In our study, we model compute by making it proportional to the product of the model size, training length, UTD, and batch size (equation 4.2 of our manuscript) akin to how we measure FLOPs for language model training. As such, our compute metric is highly correlated to actual wall clock requirements. To avoid confusion, we will add an explanation to our compute metric definition where we discuss its relation to wall clock requirements, as well as add wall clock graphs of our tested algorithms to the appendix. Thank you for the suggestion! > Does it improve the memory usage when using the optimal hyperparameter Our study uses a fixed memory size between all variants of algorithms (i.e. we use a fixed model and replay buffer sizes). As such, we left the study of memory/scaling interaction for future work. We hypothesize that the optimal or “good enough” hyperparameters indeed improve the memory usage of the algorithm, in a sense that it allows for training a successful agent with either less data or less parameters. Beyond the changes described above, we add a number of improvements suggested by other reviewers (e.g., additional quantitative analysis of proposed scaling curves, estimation of scaling laws for different levels of J), which we invite you to inspect under the following link: https://sites.google.com/view/value-based-rl-scales/ and have released the code here: https://colab.research.google.com/drive/1BaqvAMb6svGojAuiOV8qFAUrZQwfPlDg?usp=sharing. We hope that these changes increase the reviewers' confidence in our work. If so, we kindly ask to consider updating the score of our work. --- Rebuttal Comment 1.1: Comment: Thanks for the response, it has addressed my concerns. My feedback remains positive.
Summary: The main claim of the paper is that it is possible to predict optimal hyperparameters, data quantity and compute allocation for high budget from low-budget experiments. This is broken down into the following sub-claims: 1. The amount of data needed for a given performance is predictable as a function of the UTD according to power law defined in Equation 4.1. 2. The amount of data needed for a given performance is predictable as a function of the UTD according to power law defined in Equation 4.2. 3. For a given desired performance and budget, there exists a predictable “best” UTD that follows a power law in Equation 4.5. This relationship extrapolates to larger budgets. 4. The optimal choice of batch size and learning rate are predictable functions of the UTD. Claims And Evidence: **1. The amount of data needed for a given performance is predictable as a function of the UTD according to power law defined in Equation 4.2.** Intuitively, this first claim is mostly believable. The literature contains many examples that show there are sample efficiency benefits to higher UTD ratios, with diminishing returns when properly regularized. To defend this claim, the authors present Figure 2 (left). I do not find this figure sufficient to defend the claim for the following reasons: 1. It is unclear what $J$ is. 2. It is unclear whether the result holds for all values of $J$. 3. It is unclear how accurate the result is outside of visual inspection. 4. The scale of the y-axis follows an unknown exponential scheme that makes visual inspection difficult. 5. It is unclear how the result was averaged over the collection of DMC environments. 6. It is unclear whether the result holds over individual environments, or only the average. Furthermore, I’m not sure I buy that the result necessarily extends to some asymptote. For example, the challenges associated with offline RL and high UTDs have been well-documented [Kumar 2019, Levine 2020, Li 2023], requiring specific algorithmic modifications. If we were to push the UTD to some extreme values, we should expect to see either (1) collapse/divergence, where an algorithm never achieves a desired performance or (2) increasing regularization/algorithmic modifications (e.g., frequent resets, policy regularization) that would also impact the quantity of data required. There also appears to be some missing experimental details. Knowing that most methods do not scale naively to higher UTD values [Chen 2021, Li 2023, D’Oro 2023], what modifications were used with the base algorithms to obtain these results? --- **2. The amount of compute needed for a given performance is predictable as a function of the UTD according to a sum of power laws in Equation 4.2.** Similarly, this claim is believable based on prior results in the literature. To defend this claim, the authors present Figure 2 (right). I do not find this claim to be adequately defended. On top of my concerns mentioned above for Claim 1, - It is unclear how the compute is affected by the unknown variable quantity $B(\sigma)$ (“best choice” batch size for each UTD). As an aside, it is unclear to me where the “sum” of power laws appears in Equation 4.2, as it consistently solely terms multiplied together. --- **3. For a given desired performance and budget, there exists a predictable “best” UTD that follows a power law in Equation 4.5. This relationship extrapolates to larger budgets.** To defend this claim, the authors reference Figure 3 and Figure 1 (right). Again, I do not find that these figures provide sufficient evidence to defend the authors’ claims. - It is unclear how these results are generated. What is the process for determining data points? For example, the way I would expect a data point to be produced would be the following (this is my best guess, because the process is not documented). 1. Pick an environment and a UTD value. 2. Determine the best hyperparameters for this UTD value. 3. Run the algorithm until some performance J is obtained. 4. Report the (compute, data). However, in this process, neither compute, nor data is a fixed quantity. How are empirical values averaged over a collection of environments? This is highly exacerbated by the fact that different environments learn at very different rates. For example, for the OpenAI gym environments, the authors use SAC but only include 4 out of the 5 environments used by the original SAC paper (Table 1). Furthermore, the values of J seem arbitrarily selected. Based on the original SAC paper [Haarnoja, 2018], we can see that a score of 8500 on HalfCheetah is obtainable in fewer than 1M timesteps, but a score of 6625 on Ant would take more than 3M timesteps. Looking at more modern results, the authors of Cross-Q [Bhatt, 2024] report SAC results, where a score > 6000 is not obtained before 3M timesteps on Ant, but a score of 8500 on HalfCheetah is obtained in fewer than 1M timesteps. - Figures 3 and 1 (right) do not directly show UTD. While each of these points corresponding to optimal budget do correspond to some value of UTD, we do not know what that UTD value is, and obtaining this UTD value has been obfuscated by an unknown “best choice” batch size that varies between UTD values. We cannot use this information to verify Equation 4.5, nor the claim that this power law extrapolates. --- Continued in "other strengths and weaknesses" for Claim 4, due to a lack of space. Methods And Evaluation Criteria: The evaluation criteria appears to be visual inspection. I do not find this sufficient. I find the scope of benchmarks (3 domains) to be sufficient. I do think more than one algorithm per benchmark is necessary when attempting to prescribe sweeping statements. Theoretical Claims: N/A. Experimental Designs Or Analyses: It is unclear whether the experimental design is valid because they are not adequately described. See my issues with the claims made by the paper, and the lack of scientific rigor in the presentation of the results. Supplementary Material: I looked through the supplementary material to look for missing experimental details. Relation To Broader Scientific Literature: The contributions, if correct, provide an interesting use case for hyperparameter selection for large-scale experiments. I am not aware of any work in the RL space with similar observations, which would make this a valuable contribution as the community scales RL algorithms and applications. Essential References Not Discussed: No issues. Other Strengths And Weaknesses: Missing comments on Claim 4: **4. The optimal choice of batch size and learning rate are predictable functions of the UTD.** The authors defend this claim with Figure 4. Figure 4 suffers from the same lack of scientific rigour as many of the previous figures. 1. It is unclear how the best batch size or learning rate is determined over a collection of environments. 2. It is unclear if the results hold over different domains and algorithms. 3. It is unclear if the results hold over individual environments. 4. It is unclear how accurate the relationship is between UTD and best batch size/learning rate outside of visual inspection. 5. It is unclear if the results hold over different target performance levels or budgets. While I do find the results are more plainly presented than previous figures, and the trends appear to be more clear, I am not confident that there is sufficient evidence to claim the existence of a widely-applicable power law. --- I have additional concerns about the scientific rigor in Figure 1. 1. For the middle row (Budget extrapolation), there are an inconsistent number of points across environments. 2. Given this inconsistency, it’s unclear how the target performance J was determined for each point. 3. The exponential scale, and use of linear extrapolation makes the relationship appear linear at first glance, but the relationships in the top and bottom row are clearly non-linear. 4. For the right column, the line of best fit is based on points sampled on another line of best fit (each pareto frontier), rather than empirical samples. 5. There are an inconsistent number of seeds used. Standard error is not reported. --- References: - Kumar, Aviral, et al. "Stabilizing off-policy q-learning via bootstrapping error reduction." Advances in neural information processing systems 32, 2019. - Levine, Sergey, et al. "Offline reinforcement learning: Tutorial, review, and perspectives on open problems." arXiv preprint arXiv:2005.01643, 2020. - Li, Qiyang, et al. "Efficient Deep Reinforcement Learning Requires Regulating Overfitting." The Eleventh International Conference on Learning Representations, 2023. - Chen, Xinyue, et al. "Randomized Ensembled Double Q-Learning: Learning Fast Without a Model." International Conference on Learning Representation, 2021. - D'Oro, P., Schwarzer, M., Nikishin, E., Bacon, P. L., Bellemare, M. G., & Courville, A. Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier. In The Eleventh International Conference on Learning Representations, 2023. - Haarnoja, Tuomas, et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." International conference on machine learning. PMLR, 2018. - Bhatt, A., et al. "CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity." International Conference on Learning Representations (ICLR). 2024. Other Comments Or Suggestions: N/A. Questions For Authors: I do not think the scope of an ICML rebuttal would convince me to adjust my evaluation given the current critical lack of scientific rigor in the paper. For future revisions, the authors should aim to provide verifiable evidence of their claims. - Transparent experimental details to make the paper reproducible. - More complete and clearer presentation of the results, such as numerical analysis (including measures of uncertainty or standard error) and results on individual environments, and improving transparency on unknown quantities (e.g., variable "best batch size"). - Eliminate inconsistencies in evaluation protocol, removing elements such as: hand-picked values of J, inconsistent number of observations, inconsistent exponential axis scales. ---- ## update after rebuttal While I appreciate the authors attempts to improve the clarity and transparency of the results, the number of concerns I had in the original draft, on top of a number of unaddressed concerns, means I remain firm with my original score. **Quantitative analysis:** The paper makes significant claims based only on visual inspection. The authors have not addressed this issue during the rebuttal. **Arbitrary normalizing:** The authors use an unknown system for selecting environment scores for normalizing that may influence final results. Quote from original review below: > This is highly exacerbated by the fact that different environments learn at very different rates. For example, for the OpenAI gym environments, the authors use SAC but only include 4 out of the 5 environments used by the original SAC paper (Table 1). Furthermore, the values of J seem arbitrarily selected. Based on the original SAC paper [Haarnoja, 2018], we can see that a score of 8500 on HalfCheetah is obtainable in fewer than 1M timesteps, but a score of 6625 on Ant would take more than 3M timesteps. Looking at more modern results, the authors of Cross-Q [Bhatt, 2024] report SAC results, where a score > 6000 is not obtained before 3M timesteps on Ant, but a score of 8500 on HalfCheetah is obtained in fewer than 1M timesteps. **Poor clarity or missing details:** Numerous issues were detailed in the original review. For example, in Figure 2, the value of J is not described. In response, the authors mention that this value is described by a color chart in Figure 1. I do not think this is adequate organization or clarity (the reader must infer the value of J in Figure 2, by looking at Figure 1 and making a guess based on a color). **Missing experimental analysis:** The authors show that their power law provide good fits on the single environment in Isaac gym, but do not include results for individual environments in the other domains. If the results depend on aggregation across environments, and the aggregation is based on an arbitrary normalization, then I have concerns that these results are cherry-picked. I hope to see results that show otherwise. I hope these concerns are addressed in the next draft. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for the detailed review and feedback! We have added several new results & clarifications to address the concerns. Specifically, for the main points: - We have performed the requested additional analysis to improve clarity here: https://sites.google.com/view/value-based-rl-scales/ - We eliminated inconsistencies in evaluation protocol and report the additional results on the same link - To ensure transparency and reproducibility we have released the code: https://colab.research.google.com/drive/1BaqvAMb6svGojAuiOV8qFAUrZQwfPlDg?usp=sharing. We think that these changes substantially strengthen the paper and are glad that we concur on the vision of understanding how to predictably scale value-based RL. However, we also think some of the comments might stem from a misunderstanding of certain parts of the paper, and we attempt to clarify these below. We will update the paper to clarify these. Please let us know if you find your concerns addressed and if so, we would be glad if you are willing to raise your score. > what J is As specified on Line 139, J denotes the return of the policy normalized between 0 and 1000 (with normalization detailed in Table 1). > unclear whether the result holds for all values of J It does: https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.mlu2zpb9cilb > hand-picked values of J, inconsistent number of observations We standardized the number of observations in this plot to be 10 and standardized the values of J to be equally spaced in log space: https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.2u2k4w2tndmu. > It is unclear how accurate the result is. We provide an additional extrapolation result where we use our model to predict the data/compute required to reach J (https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.erwyxjer0f42). The error is only 7.8% and 10.6% for extrapolating toward larger data and larger compute, respectively. > y-axis follows an unknown exponential scheme Following prior work (Kaplan’20), y-axis follows a logarithmic scale. To improve readability, we also created a version that uses logarithm with base 2 (https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.8e53cf8zcw66). > unclear how the result was averaged (...), unclear whether the result holds over individual environments We shared fits across several environments to increase the generality and robustness of the results (paragraph in line 391 and Appendices B & D). On IsaacGym, we also show that it is possible to obtain a good fit on a single task. > challenges associated with offline RL and high UTDs Following DroQ, we use LayerNorm in our SAC implementation and BRO uses LayerNorm and resets by default. This allows higher utd. We further add a new experiment with high UTD of 64: https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.mo01yp5qtkz0. Here, we use the existing hyperparameter fits based on utd 0.25...8, extrapolating hyperparameters far beyond that, and we find that the data requirement fit is still accurate. > It is unclear how accurate the relationship is between UTD and best batch size/learning rate. We provide quantitative correlations here https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.9t39ptle0tkt. We also provide quantitative evaluation of a baseline approach that uses hyperparameters optimal for UTD=2 to extrapolate to larger UTD. > It is unclear how the compute is affected by the unknown variable quantity B(σ). It is unclear how the best batch size or learning rate is determined over a collection of environments. Best choice batch size is provided in Eq (4.6). The workflow used to obtain the value is provided in section 4.3. We use different hyperparameters per environment. > It is unclear how these results [for Eq 4.5] are generated. The description the reviewer provides is correct. Please also refer to Section 4.3 for exact workflow. > How are empirical values [for Eq 4.5] averaged over a collection of environments? This is described in l864. We normalize the data requirements by per-environment median. > Figures 3 and 1 (right) do not directly show UTD. Please refer to the corresponding points in Figure 2. > The line of best fit is based on points sampled on another line of best fit (each pareto frontier), rather than empirical samples. It is not possible to use empirical samples since the grid search includes only a small number of UTDs. Estimating scaling laws, however, enables us to optimize hyperparameters with higher precision than the grid search, similar to LLM literature (Kaplan’20, Dubey’24). > There are an inconsistent number of seeds used. We used the maximum number of seeds that was feasible given our compute limitations. Different algorithms allow different number of seeds to be run in parallel. --- Rebuttal Comment 1.1: Comment: Thank you for the response. To the best of my knowledge, I did not misunderstand elements of the paper (at least in the cases the authors have responded to). My concern remains with the fact that many details are missing or obscured. While I appreciate the additional figures, I remain concerned about the lack of quantitative and statistical analysis, how the environments are averaged together (see my concerns on environment scores in point 3), and lack of transparency on key experimental quantities. > what J is To be clear, my concern is not with the definition of J, but with the fact that the value of J is not described in the Figure. > We provide an additional extrapolation result where we use our model to predict the data/compute required to reach J … I find this figure unclear. How are you measuring error? How are you computing the extrapolations? > We shared fits across several environments to increase the generality and robustness of the results (paragraph in line 391 and Appendices B & D). I find the use of isotonic regression (Appendix D) to transform the data potentially concerning. The authors justify this usage in that they can make reliable predictions (unquantified), but is the accuracy of these predictions also based on transformed data? > On IsaacGym, we also show that it is possible to obtain a good fit on a single task. The experiments with IsaacGym only use a single algorithm and environment, does this result generalize? > Following DroQ, … These details need to be in the paper. > Best choice batch size is provided in Eq (4.6). The workflow used to obtain the value is provided in section 4.3. We use different hyperparameters per environment. What is the metric used to determine the best batch size? Optimizing for compute, performance, or training error will all significantly affect final compute results. What quantities are used by each environment in the Figures? > This is described in l864. We normalize the data requirements by per-environment median. How does this translate to the small set of empirical values presented in the Figures? What is the standard error on these predictions? --- Reply to Comment 1.1.1: Comment: Thank you for the quick response and for the time to help make our paper better. We present new results. Please let us know if these responses address your concerns. - “I remain concerned about the lack of quantitative and statistical analysis” We now added confidence intervals to extrapolation results (https://sites.google.com/view/value-based-rl-scales/). Please let us know if we should add some other analysis to fully resolve your concern. - “concerns on env. scores in point 3” Could you please clarify what concerns remain from point 3? As described in our response, the difference between different environments is accounted for via normalizing. - “lack of transparency on key experimental quantities.” We have provided extensive appendices, new analysis, and released the code, with details in Appendix B. Can you clarify if there are other ways we can improve transparency: we are committed to doing so. We provide detailed responses below > What is the standard error on these predictions? We provide a new statistical analysis of the UTD extrapolation here: https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.tw8rudtiurgq. Concretely, we computed the confidence interval via bootstrapping over seeds. P-values of the hypothesis that optimal UTD doesn’t depend on budget $f$ are 0.0003, 0.000009, and 0.001 on the DMC/Gym/IsaacGym, proving that there is a dependency. The relative errors of the predictions are 1.5%, 7.7%, and 1.1%. We provide a new analysis of the Pareto frontier here: https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.erwyxjer0f42. We computed the confidence intervals via bootstrapping budget values. For extrapolating towards higher compute and higher data, the p-values of the hypothesis that data requirement doesn’t depend on UTD are 7e-12 and 8e-12. The relative errors of the predictions are 7.8%, and 10.6%. > Value of J Figure 1 contains a color bar describing the value of J. We will clarify this as shown here: https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.kajfdp9z1yj1 > How are you measuring error? How are you computing the extrapolations? We reported the relative error of 7.8% and 10.6%. That is, RelativeError = (TrueValue - PredictedValue) / TrueValue. To compute it, we estimate the parameters of the fit (Eq 4.1) and then comparing predicted and held out true values. > Isotonic regression (Appendix D) Because the policy, initial states, and the environments can be stochastic, it is necessary to smooth the return curves to determine $\mathcal{D}_\textrm{J}$. However, the commonly used exponential or gaussian smoothing when reporting RL learning curves presents the need to tune the amount of smoothing. Qualitatively, isotonic regression is more reasonable and does not require a hyperparameter so it is preferred. Please refer to Figure 7 as well as this new visualization: https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.niwnmjq1h40d. Finally, we would also like to note that we do not see the use of isotonic regression by itself as a concern since this choice still allows us to fit relationships between hyperparameters, data and compute budgets that still extrapolate to producing good raw scores at scale. Hence it does not inhibit extrapolation or reliable predictions of raw scores. That said, if there are better ways to transform raw data, we can try them out. Note that since scaling laws for value-based RL are fairly unexplored, we are unaware of any standard practice. > The experiments with IsaacGym only use a single algorithm and environment, does this result generalize? We perform experiments with 3 algorithms and 3 domains. While we agree additional results would improve the paper, it is not feasible to add them due to space limitations. > These details need to be in the paper. We are not able to revise the submission on openreview. We will add the details in the final version. > What is the metric used to determine the best batch size? An insightful question! As we note on l100, since the values of return can be arbitrary, we instead select the batch size to minimize data required to reach J=800. This is the same as minimizing the number of gradient steps. As the reviewer notes, this affects the FLOPs optimality. However, we decided to report this value as we believe minimizing wall clock time is more important than FLOPs. Building a more complete model of effects of batch size on optimal FLOPs is an interesting direction of future work. > What quantities are used by each environment in the Figures? We provide the law for batch size in table 3 and specific values here: https://sites.google.com/view/value-based-rl-scales/home?authuser=3#h.3yvov5uicmra > How does this translate to the small set of empirical values presented in the Figures? In the multi-environment experiments, we fit the normalized average of the individual environments. This is described on l711.
Summary: The authors demonstrate that value-based deep RL scales predictably, showing a Pareto frontier controlled by the updates-to-data (UTD) ratio. This paper shows how the optimal hyperparameters can be predicted from low-cost experiments, enabling an extrapolation to higher data or compute experiments. Validation is done using algorithms like SAC, BRO, and PQL across multiple benchmarks. Claims And Evidence: The claims are supported by a large amount of empirical evidence. The scale of the experiments lead to convincing results. Methods And Evaluation Criteria: Although the evaluation criteria is fitting for the problem at hand, I would have liked to see some experiments on harder, pixel-based environments such as Atari. It would be interesting to see if these scaling laws hold with increased environment or network complexity. Theoretical Claims: The claims are empirically derived. However, the derivations seem sound. Experimental Designs Or Analyses: - Supplementary Material: - Relation To Broader Scientific Literature: The biggest improvement over broader scientific literature is that the authors also study the tradeoff between available data and compute. Essential References Not Discussed: - Other Strengths And Weaknesses: Strengths: Clear, practically helpful paper with a lot of empirical evidence. Weaknesses: I will not claim that the experiments are limited, but, as said before, I would have liked to see some harder, pixel-based environments. Other Comments Or Suggestions: I would like to see some experiments on a more complex environment such as Atari. Although I know this might seem compute intensive, the recent PQN [1] algorithm seems to be very fast. [1] Gallici et al, 2024. Simplifying Deep Temporal Difference Learning. Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the review and the positive feedback. Please let us know if the response below addresses your concerns, and if there are any concerns remaining. > Although the evaluation criteria is fitting for the problem at hand, I would have liked to see some experiments on harder, pixel-based environments such as Atari. It would be interesting to see if these scaling laws hold with increased environment or network complexity Our research vision is to understand the impact of various design choices on scaling of value-based RL algorithms. This includes a number of choices such as the choice of environments, networks, replay buffers, algorithms, etc, but unfortunately due to the limited compute budget we were left with the choice of only being able to study a few design choices only that could fit in the scope of the first paper along this direction. Therefore, we decided to first start from environments that are cheap to run (state-based) and considered different network architectures including small vanilla MLPs (used by SAC and PQL) and relatively big resnet-based models (BRO) in their specific domains. We agree that expanding to pixel-based environments would further strengthen the generalizability claim. While this is costly to do, we are happy to add preliminary results with a subset of Atari environments in the camera-ready version. In addition, in the rebuttal we have added a number of improvements suggested by other reviewers (e.g., additional quantitative analysis of proposed scaling curves, estimation of scaling laws for different levels of J), which we invite you to inspect under the following link: https://sites.google.com/view/value-based-rl-scales/ and have released the code here: https://colab.research.google.com/drive/1BaqvAMb6svGojAuiOV8qFAUrZQwfPlDg?usp=sharing. We hope that these changes increase the reviewers' confidence in our work. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. I would like to see the preliminary results with a subset of Atari environments in the camera-ready version. I will keep my positive score.
Summary: This paper investigates the scalability and predictability of value-based RL using TD learning. It establishes predictable, hypothetic equations between three key hyperparameters (batch size, learning rate, and UTD ratio) and shows that data and compute requirements for a given performance lie on a Pareto frontier. By modeling the tradeoff between data and compute, the authors predict resource needs and optimal hyperparameter settings for large-scale experiments based on low-budget data. Finally, they empirically demonstrate that such findings can be extended to algorithms like SAC, BRO, and PQL and domains such as DeepMind Control, OpenAI Gym, and IsaacGym. Claims And Evidence: The authors make several key claims regarding the predictability of scaling in value-based RL, which are well-supported, as the authors provide empirical fits, validate their model across different datasets, and show that extends to well-estabilished algorithms like SAC, BRO, and PQL. Methods And Evaluation Criteria: The method for proposing their claim and empirical relationships are well-constructed and appropriate for the problem. Their study clearly formulates and supports the predictability of value-based RL scaling. Theoretical Claims: No theoretical claims were made. Experimental Designs Or Analyses: The experimental setup is generally well-structured, but a key concern is the limited selection of baseline algorithms. The rationale for choosing SAC, BRO, and PQL is weak and not fully justified. For example, I don't think BRO is the state-of-the-art algorithms in Gym; SimBa [1] offers a more practical and performant alternative by simply modifying SAC’s architecture, which could help assess the impact of network design on the proposed predictability framework. Additionally, the paper does not compare against other well-established state-of-the-art methods such as MR.Q [2]. I believe resolving my concern would enhance the evaluation and strengthen the study’s claims. [1] SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning., ICLR'25. [2] Towards General-Purpose Model-Free Reinforcement Learning., ICLR'25. Supplementary Material: Yes. Reviewed all sections. Relation To Broader Scientific Literature: This paper extends scaling law research from supervised learning to value-based reinforcement learning (RL), providing a contribution towards a large-scale RL foundation model. Essential References Not Discussed: I think most references in mind were present. However, more works on scaling model size [1] and UTD ratio [2, 3] in RL could be included: 1. SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning., ICLR'25. 2. DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization., ICLR'24. 3. The Dormant Neuron Phenomenon in Deep Reinforcement Learning., ICML'23 Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: 1. The study establishes predictable scaling laws for batch size, learning rate, and the UTD ratio, which are well-supported by empirical evidence. However, other parameters such as model size, weight decay, and target network update rates are also known to influence training stability and performance in deep RL. I wonder if the authors have considered these configurations too. 2. It was surprising that the compute-data pareto frontiers (Figure 1, left) for OpenAI Gym seems to be much flatter than other environments. Do the authors have any intuition about this phenomenon? Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for valuable feedback regarding our work. In accordance with your suggestions, we have made a number of changes to our manuscript. These include expanded discussion of limitations and experimental design. We discuss these in detail below: > (other parameters such as model size, weight decay, and target network update rates are also known to influence training stability and performance in deep RL While studying a number of these factors is definitely along our vision of better understanding the behavior of scaling value-based RL algorithms along various axes, we were faced with a decision of only studying certain axes for the submission due to a limited compute budget. Therefore, we chose to initiate our study into this line of work with the most fundamental hyperparameters such as batch size, learning rate, and UTD values. We are now working to extend this line of work to study the impact of model size and target network update rates as the next piece of work along this research vision. We will also add the following discussion to limitations of this paper: _Whereas this study investigated the relationship between learning rates, batch sizes, and UTDs, there are a variety of other design choices and hyperparameters that could potentially impact the scaling profile of a given algorithm. In particular, previous works have shown that model size, weight decay, and target network update strategy can significantly affect the training dynamics between different UTD values._ > It was surprising that the compute-data pareto frontiers (Figure 1, left) for OpenAI Gym seems to be much flatter than other environments. Do the authors have any intuition about this phenomenon? Indeed, it seems that the effectiveness of UTD is benchmark-specific, where increased UTD leads to pronounced sample efficiency improvements in some environments, and substantially less pronounced in others. We believe that studying the underlying phenomena that lead to such differences is an exciting avenue for future research. > (...) The rationale for choosing SAC, BRO, and PQL is weak and not fully justified. SimBa [1] offers a more practical and performant alternative… While in general, one can study scaling laws for any value-based RL algorithm, we were faced with the tough choice of only studying a few algorithms due to limited compute. We chose SAC, BRO, and PQL because prior work has shown them to be effective in the respective benchmarks (Gym, DMC, Isaac Gym) under different UTD configurations. We will add this justification to the paper. Our framework is independent and can be applied to both BRO and SimBa, which are both methods focused on scaling the number of parameters. We observed consistent findings across a wide class of SAC-derived methods and therefore believe the results would also extend to SimBa. Due to compute constraints, we were unable to run SimBa experiments for rebuttal. We also note that our goal is not to argue that our scaling laws give rise to the best possible value-based RL algorithm or that we are able to attain the best possible results in our experiments, but rather our point is to simply show that we can predict the behavior of value-based RL in larger-scale settings using small-scale experiments. We believe that showing this for a certain class of algorithms is still valuable as a starting step in this direction of research. We believe future state of the art methods will use our framework to improve performance, closing the loop. > Additionally, the paper does not compare against other well-established state-of-the-art methods such as MR.Q Thank you for pointing us towards that work, which we now cite. We agree that studying the scaling capabilities of this general algorithm would be particularly interesting, however we were not able to include MrQ in our submission as it was arXived after the ICML deadline. > However, more works on scaling model size [1] and UTD ratio [2, 3] in RL could be included: Thank you for pointing these missing references, we added all of them to our manuscript. Beyond the changes described above, we add a number of improvements suggested by other reviewers (e.g., additional quantitative analysis of proposed scaling curves, estimation of scaling laws for different levels of J), which we invite you to inspect under the following link: https://sites.google.com/view/value-based-rl-scales/ and have released the code here: https://colab.research.google.com/drive/1BaqvAMb6svGojAuiOV8qFAUrZQwfPlDg?usp=sharing. We hope that these changes increase the reviewers' confidence in our work. If so, we kindly ask to consider updating the score of our work. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification! I have updated my score from 3 to 4. I really enjoyed reading this paper and would be happy to see it discussed in the community. One particularly interesting point—highlighted in Figure 5—is that small batch sizes tend to perform well in high UTD regimes. While this may be known to some practitioners, it is not widely recognized, and the paper presents it. I recommend citing the following relevant work: - Small Batch Deep Reinforcement Learning, Ceron et al., NeurIPS 2023 A minor suggestion regarding Section 4.2: the current framing suggests that batch size relates to overfitting and learning rate to plasticity. However, these two factors are highly intertwined, and the relationship is not strictly one-to-one. It may be clearer to frame this section as exploring how both batch size and learning rate jointly affect plasticity and overfitting, rather than assigning each concept to a single factor. This would help avoid potential misconceptions, especially for readers less familiar with the literature. --- Reply to Comment 1.1.1: Comment: We will update 4.2 accordingly and add "Small Batch Deep Reinforcement Learning" to our discussion for the camera-ready version. Thank you for updating the score and for helping to improve the quality of our manuscript.
null
null
null
null
null
null
Learn from Downstream and Be Yourself in Multimodal Large Language Models Fine-Tuning
Accept (poster)
Summary: This paper addresses the issue of catastrophic forgetting in the fine-tuning of multimodal large language models (MLLMs). The proposed method, Specialization via Importance Discrepancy Evaluation for Refinement (SPIDER), introduces an innovative approach for measuring parameter importance by utilizing the magnitude of frozen pre-trained weights and fine-tuning gradients. Based on this, SPIDER applies an importance-aware weight allocation strategy to prevent the pre-trained knowledge forgetting during fine-tuning. Experimental results demonstrate that SPIDER enhances downstream specialization performance while effectively mitigating generalization degradation. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. The methods are well evaluated. Theoretical Claims: The authors provide clear normalization boundary analysis for both the weight and gradient parts. Experimental Designs Or Analyses: The experimental results are comprehensive. The author conducts various experiments with different relative method in various downstream tasks. Supplementary Material: Author provides the method introduction and algorithm code. It is helpful to understand the method pipeline and the related methods details. Relation To Broader Scientific Literature: The author introduces a novel aspect to alleviate the forgetting during MLLM fine-tuning via both pre-trained magnitude and current gradient information. Essential References Not Discussed: NaN Other Strengths And Weaknesses: Pros: The SPIDER method presented in this paper is novel in that it does not rely on additional parameters. Instead, it utilizes gradients to guide parameter updates, influencing the update direction and making the model’s forgetting mechanism interpretable. Experimental results validate its effectiveness on LLaVA and VILA. Moreover, the approach is compatible with other optimization strategies, offering both simplicity and efficiency. Cons: - While SPIDER focuses on mitigating forgetting, its memory complexity may pose challenges for scaling. It is better to discuss the memory cost with the existing methods for better understanding. - The authors could plot the parameter modification scale with existing methods. For example, both dare and tailor select a partial element for updates. Besides, the notation table is also important to make the paper reading friendly. - The text in Table 4 is too small, and some parts are obscured. Please consider increasing the font size or adjusting the layout for better readability. - The meaning of “Architecture Couple” in Table 1 seems unclear. Please provide a more detailed explanation of the significance of SPIDER and its advantages in the context of “Architecture Couple”. - Why OKVQA, TextVQA, GQA, and OCRVQA are chosen as the datasets to evaluate the generalization ability? What do these datasets represent, and how do they contribute to assessing generalization? Other Comments Or Suggestions: Typos: “COCO-Capation” should be “COCO-Caption” in L124, L439, etc. Questions For Authors: -Please refer to Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer mGiB: Thank you for your valuable comments and constructive feedback. Below, we carefully address each of your concerns point-by-point, providing detailed explanations and additional evidence to clarify our approach. **Q1: Memory Complexity** (Other Strengths And Weaknesses) A1: For our method, we utilize the pre-trained weights and the current gradient to evaluate parameter importance for both generalization and specialization, which brings a certain memory cost increase with existing methods. However, our method achieves stable performance across different tuning scenarios, as shown in the following Table. We will add the memory complexity comparison in the final version. **Q2: Parameter Modification Scale** (Other Strengths And Weaknesses) A2: Sorry for the misunderstanding. We plot the parameter modification scale in the following Table. We will add the parameter modification scale and notation table in the final version. *Table: Comparison with relative methods. $M$ means selected parameters. Performance is the harmonic mean of generalization and specialization accuracy.* |Method |Full FT|L2-Reg|Grafting|Half FT|DARE|Tailor|Ours| |----|---|---|---|---|---|---|--| |Memory Complexity|$\mathcal{O}(\|M\|)$|$\mathcal{O}(2\|M\|)$|$\mathcal{O}(2\|M\|)$|$\mathcal{O}(\|M\|)$|$\mathcal{O}(\|M\|)$|$\mathcal{O}(\|M\|)$|$\mathcal{O}(3\|M\|)$| |Learnable Ratio|100%|100%|100%|50%|10%|10%|50% |Performance (VILA on Flickr30k)|55.16|48.33|49.58|59.52|53.66|54.56| **66.61** **Q3: Font size and Layout** (Other Strengths And Weaknesses) A3: Thanks for your advice. We will adjust the font size and layout to make it more readable! **Q4: Advantage Discussion for Proposed Method** (Other Strengths And Weaknesses) A4: Our method selects and optimizes the relative downstream important elements to adapt towards the downstream specialization while maintaining the generalization ability. Thus, our method does not bring additional architecture modification and does not couple with a specific model architecture. We will update Table 1 and provide a more detailed advantages discussion for our method in the final version. **Q5: Explanation and Contribution for Selected Datasets** (Other Strengths And Weaknesses) A5: The datasets OKVQA, TextVQA, GQA, and OCRVQA are obviously mentioned as the training datasets in the pre-training stage, making them appropriate benchmarks to evaluate multimodal large language models’ (MLLMs) generalization across diverse tasks. OKVQA examines external knowledge and common-sense reasoning, TextVQA and OCRVQA test understanding of embedded textual information, and GQA assesses compositional and logical reasoning. Together, these datasets comprehensively evaluate MLLMs’ generalization across different reasoning types and practical scenarios. We will provide a detailed explanation of the selected datasets in the paper. Thanks for your comments! **Q6: Typos** (Other Comments Or Suggestions) A6: Thanks for your advice. We will fix the typos and check the final version carefully. --- Rebuttal Comment 1.1: Comment: After reading the rebuttal, the authors have adequately addressed my concerns. I increase my score.
Summary: In this work, it focuses on the Multimodal Large Language Models (MLLMs) fine-tuning field and reveals the catastrophic forgetting on the pre-training knowledge. Authors assess parameter importance for both generalization and specialization, focusing on identifying downstream-important elements and performing critical-aware updates on selected parameters. Through a series of experiments, SPIDER is shown to effectively enhance task-specific performance, such as in image captioning and visual question answering, while preventing the degradation of generalization. This work offers an insightful contribution to improving fine-tuning practices for large, pre-trained multimodal models. Claims And Evidence: The claim made in the paper is clear. Methods And Evaluation Criteria: The authors respectively measure the downstream and upstream performances and further utilize the upstream-downstream mean value to measure the overall performance. Theoretical Claims: NaN Experimental Designs Or Analyses: The experimental results are comprehensive. The author conducts various experiments with different relative methods in various downstream tasks. Supplementary Material: I have reviewed the supplementary material. The pseudo algorithm is clear and easy to understand. Relation To Broader Scientific Literature: The method is interesting as it achieves better downstream performance via measuring the parameter importance discrepancy. Essential References Not Discussed: It is encouraged to add some discussions with incremental learning, as catastrophic forgetting is also a challenging and popular problem in incremental learning. Other Strengths And Weaknesses: Strengths: The approach of leveraging parameter magnitude and gradient norm to identify key task-specific parameters is compelling. The authors conduct extensive experiments across multiple datasets, including comprehensive ablation studies, to validate their method. Weaknesses: Given that SPIDER focuses on fine-tuning Multimodal Large Language Models (MLLMs), the authors should report both generalization and forgetting metrics, as well as specialization improvements, to provide a more thorough comparison with existing methods. As for the downstream dataset selection, the author should plot the detailed data introduction for their response type and which field they belong to. Other Comments Or Suggestions: The paper is written in a relatively complex manner, which may make it difficult for readers who are not experts in this specific field to fully understand it. It would benefit from improvements in clarity and accessibility, making it easier for a broader audience to grasp the key ideas and contributions. Questions For Authors: The authors should add more discussion with incremental learning as catastrophic forgetting is a popular research problem in that scope. Besides, relative forgetting and increase metrics are also curial for readers to understand your paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer CY1z: Thank you for your insightful review and valuable feedback. Below, we address your key concerns in detail, aiming to clarify and demonstrate the effectiveness of our proposed approach. **Q1: Discussion on Incremental Learning** (Essential References Not Discussed & Questions For Authors) A1: Incremental learning focuses on continuously learning new tasks that are distinct from previously learned tasks, with the requirement to classify all observed classes during the testing phase [1,2,3]. Direct optimization solely on the current distribution can result in catastrophic forgetting of previously acquired knowledge. Existing incremental learning methods broadly fall into three categories: parameter isolation methods [4,5], regularization-based methods [6,7], and replay-based methods [8,9]. In this work, we investigate catastrophic forgetting within the multimodal large language model (MLLM) tuning process, aiming to mitigate the degradation of generalization and enhance specialization performance. [1] Class-incremental learning: A survey, IEEE PAMI, 2024 [2] A continual learning survey: Defying forgetting in classification tasks, IEEE PAMI, 2021 [3] A comprehensive survey of continual learning: theory, method and application, IEEE PAMI, 2024 [4] Progressive Neural Networks, arXiv, 2016 [5] Dense network expansion for class incremental learning, CVPR, 2023 [6] Overcoming Catastrophic Forgetting in Neural Networks, PNAS, 2017 [7] Memory aware synapses: Learning what (not) to forget, ECCV, 2018 [8] icarl: Incremental classifier and representation learning, CVPR, 2017 [9] Co-transport for class-incremental learning, ACM MM, 2021 **Q2: Downstream Dataset Introduction** (Other Strengths And Weaknesses) For downstream evaluation, we selected datasets (**Flickr30k**, **COCO-Caption**, **IconQA**, **ScienceQA**) that exhibit limited zero-shot performance or were excluded from the pre-training stage. Flickr30k and COCO-Caption focus on text generation tasks within general domains. Specifically, **Flickr30k** emphasizes everyday activities and events, while **COCO-Caption** features more complex, diverse scenes involving multiple objects and intricate interactions. Both IconQA and ScienceQA involve multiple-choice question answering tasks. **IconQA** evaluates abstract and symbolic reasoning using visual questions based on icons and diagrams, whereas **ScienceQA** assesses multimodal scientific reasoning in educational contexts, incorporating textual and visual content from disciplines such as physics, chemistry, biology, and general science. We will incorporate a detailed introduction to these downstream datasets in our revised paper. **Q3: Writing Clarity and Accessibility** (Other Comments Or Suggestions & Questions For Authors) A3: Thank you for highlighting this. We will simplify the formulations and enhance the clarity of our writing in the revised manuscript. **Q4: Relative forgetting and increase metrics** (Questions For Authors & Other Strengths And Weaknesses) A4: Thank you for the valuable suggestion. We will incorporate relative forgetting and increase metrics into Tables 4 and 5 of the revised manuscript, explicitly reporting both generalization forgetting and specialization improvement to provide a clearer evaluation of method effectiveness. Thank you once again for your constructive feedback and valuable suggestions! --- Rebuttal Comment 1.1: Comment: Thanks for the detailed clarifications and discussions. After reading the comments from other reviewers, I believe this is a good paper for the community and I would like to keep my positive rating.
Summary: This paper introduces a novel and well-structured strategy to address the persistent challenge of catastrophic forgetting that arises during the fine-tuning of Multimodal Large Language Models (MLLMs). This method leverages a meticulous analysis of parameter importance discrepancies to guide the optimization process, ensuring that previously learned knowledge is effectively retained while allowing the model to adapt and specialize for downstream tasks. By carefully balancing the trade-off between generalization and specialization, this approach enhances the model’s ability to perform robustly across diverse scenarios without compromising its adaptability to new domains. Furthermore, this strategy provides a systematic framework for mitigating knowledge degradation, ultimately improving the model’s stability and performance in real-world applications. Claims And Evidence: Yes. This paper follows the relative requirements. Methods And Evaluation Criteria: Yes. It includes difference metrics. Theoretical Claims: The authors could specify the value range boundaries for sections 2a-2c and provide a rationale for selecting the sigmoid and normalization operations. Experimental Designs Or Analyses: Although the authors conduct experiments on multimodal large language models tuning with larger training epochs, it would be better to provide the depth analysis why full-ft deceases the downstream performance with larger epochs. Supplementary Material: Yes, it would be beneficial to clearly highlight and distinguish the differences between Regularization-based Optimization and Partial-based Updating solutions. Currently, the authors present both approaches together without explicitly illustrating their distinctions. Providing a more detailed comparison, such as through annotations or a dedicated discussion, would improve clarity and enhance the reader’s understanding of how these two methods differ in their optimization strategies and impact on the model. Relation To Broader Scientific Literature: The author leverages both pre-training and downstream knowledge to construct an importance discrepancy mask, effectively mitigating forgetting while ensuring specialization. This approach presents an intriguing solution that enhances Multimodal Large Language Models (MLLMs) by striking a balance between domain-specific adaptation and generalization capability, ultimately improving their applicability and robustness in real-world scenarios. Essential References Not Discussed: It would be helpful if the authors included references related to large model merging, as they make comparisons with the Tailor and Dare methods. Other Strengths And Weaknesses: This paper explores the issue of catastrophic forgetting in Multimodal Large Language Models (MLLMs) fine-tuning. The manuscript is well-written, structured, and easy to follow, with the proposed method being clearly presented and demonstrating novelty. There are several weaknesses for updates. -The authors should include a discussion of relative performance, particularly in comparison to Full-FT. Full-FT, while a widely adopted and straightforward fine-tuning solution, yields lower performance across various downstream tasks, as shown in the Table 4 and especially for the flickr30k and coco-caption. A more detailed analysis would provide a clearer context for evaluating propose method advantages. -The authors should add a detailed discussion with the foundation model merge works. Because author compare the proposed method with both Tailor and Dare. Other Comments Or Suggestions: Please refer to the weaknesses. Questions For Authors: If the authors address the issues mentioned in the weaknesses section, I would be happy to increase the score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer mmMS: Thank you very much for your affirmation of our work, as well as the insightful concerns and questions you have raised. We have carefully considered each comment and provided responses. **Q1: Value Range Boundary and Normalization Operation Rationale** (Theoretical Claims) A1: Sections 2a–2c measure parameter importance for generalization by leveraging the pre-trained weight magnitudes. Specifically, step 2a computes the absolute values of pre-trained weights, yielding a range of [0,$+\infty$). Step 2b normalizes these absolute values to a standardized range of ($-\infty$,$+\infty$). Finally, step 2c rescales the normalized values into the range (0,1) using a sigmoid function. Together, these steps systematically rank parameters according to their importance for generalization, facilitating a meaningful comparison with parameters important for specialization. We will provide detailed explanations and discuss the rationale behind these operations in the revised manuscript. Thanks for your advice! **Q2: Depth Analysis on Full-FT** (Experimental Designs Or Analyses & Other Strengths And Weaknesses) A2: Full-FT directly updates all parameters to adapt to the downstream distribution. However, specialized downstream applications typically have limited data compared to large-scale pre-training datasets. Thus, Full-FT often leads to increased overfitting, especially when training for more epochs, resulting in performance degradation on downstream tasks. We will include a detailed analysis of Full-FT performance in the revised manuscript. Thank you for tips! **Q3: Differences between Regularization-based Optimization and Partial-based Updating solutions** (Supplementary Material) A3: Regularization-based optimization and partial-based updating represent two distinct approaches to addressing catastrophic forgetting in multimodal large language model (MLLM) tuning. The regularization-based approach introduces stiffness regularization terms, carefully balancing generalization and specialization through controlled penalization. In contrast, partial-based updating selects only a subset of parameters to update, preserving the rest. Our work adopts the partial-based updating paradigm, measuring parameter importance for both generalization and specialization, and selectively updating parameters significant for downstream tasks. We will clearly and concisely discuss the differences compared to existing methods in our revised manuscript. Thank you for your valuable suggestion! **Q4: Large Model Merging Paper Discussion** (Essential References Not Discussed & Other Strengths And Weaknesses) A4: Large Model Merging is a promising paradigm that integrates task-specific models into a unified model capable of simultaneously handling diverse downstream tasks [1]. Existing methods typically represent each task expert using downstream-optimized models and rely on various importance metrics to select specific elements for merging [2,3,4]. However, post-merging operations often disrupt the originally optimized parameter space, limiting the specialized capabilities for individual tasks. In our work, we select relative specialization importance elements during the training process, which is compatible with integrates into the training trajectory and squeeze the specialized capabilities into the target parameter space. We will add large model merging works discussion in our final version. [1] Learn From Model Beyond Fine-Tuning: A Survey, Nature Machine Intelligence, 2024 [2] A Unified View of Delta Parameter Editing in Post-Trained Large-Scale Models, arXiv, 2024 [3] Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models, ICML, 2024 [4] REMEDY: RECIPE MERGING DYNAMICS IN LARGE VISION-LANGUAGE MODELS, ICLR, 2025
Summary: This manuscript addresses the catastrophic forgetting issue in fine-tuning MLLMs. It proposes SPIDER, which measures parameter importance based on pre-trained weight magnitudes and fine - tuning gradients. SPIDER uses Importance Discrepancy Measurement (IDM) to rank parameter importance and Importance Selection Mask (ISM) for selective parameter updates. Experiments on image captioning and VQA with models like LLaVA and VILA show that SPIDER effectively balances generalization and specialization, outperforming baseline methods. ## update after rebuttal The response has addressed all my concerns, and I will increase my score. Claims And Evidence: Overall, all claims are well-supported. However, one point that confuses me is the rationale behind simply comparing I[v] and G[v] after normalization and rescaling to determine which weights should be updated. This approach lacks sufficient justification. Specifically, could this method potentially lead to updates in nearly half of the weights? Given that general knowledge from pretraining typically outweighs specialized knowledge gained through fine-tuning, might this not be somewhat unreasonable or problematic? Methods And Evaluation Criteria: Yes Theoretical Claims: The paper lacks rigorous mathematical proofs. Experimental Designs Or Analyses: No Supplementary Material: No Relation To Broader Scientific Literature: Building upon the concept of catastrophic forgetting in neural networks, this paper proposes a novel approach to better balance generalized and specialized knowledge in MLLMs during fine-tuning, effectively mitigating forgetting. Essential References Not Discussed: The following paper is an important paper on overcoming the catastrophic forgetting of neural networks, but it is not cited or discussed in this paper. [1] Aljundi R, Babiloni F, Elhoseiny M, et al. Memory aware synapses: Learning what (not) to forget[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 139-154. Other Strengths And Weaknesses: Strengths: 1. Novel solution: Proposes SPIDER to address catastrophic forgetting in MLLM fine-tuning by measuring parameter importance, offering a new way to balance generalization knowledge and specialization knowledge. 2. Comprehensive experiments: Tests on multiple MLLM architectures and datasets, comparing with SOTA methods. Ablation studies and various metrics validate its effectiveness. 3. Practical features: Architecture-agnostic, compatible with different optimization functions, and requires no hyper-parameter configuration. Weaknesses 1. Memory issue: Ranking parameters by pre-trained weights and gradients increases memory usage, which may be a problem in resource-constrained settings. 2. Distribution sensitivity: The current approach determines parameter updates by simply comparing the normalized and rescaled values of I[v] and G[v], potentially leading to updates in nearly half of the parameters. Consequently, extensive parameter updates may occur, resulting in decreased generalization ability even when the downstream task distribution differs only slightly from the pre-trained distribution. 3. Lack of comparison with traditional methods: Catastrophic forgetting is not limited to the field of large models. There have been many studies on this aspect in neural networks, such as ref[1]. This article lacks a comparison with this traditional method. [1] Aljundi R, Babiloni F, Elhoseiny M, et al. Memory aware synapses: Learning what (not) to forget[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 139-154. Other Comments Or Suggestions: 1. Consider how to improve the comparison between I[v] and G[v] so that the problem of generalization perturbation caused by fine-tuning under the same distribution can be solved. Specifically, in the case of close distribution, further dynamic reduction of update parameters may be difficult to achieve in the short term. 2. Adding missing traditional references and comparative experiments to overcome catastrophic forgetting. Questions For Authors: See "Other comments or suggestions" for details. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer mG7E: We sincerely appreciate your time and effort in reviewing our paper. Through the detailed responses outlined below, we seek to fully address your concerns and provide transparency into our proposed approach. **Q1: Essential Reference Discussion and Comparison** (Essential References Not Discussed & Other Strengths And Weaknesses & Other Comments Or Suggestions) A1: We apologize for overlooking the critical reference. Memory Aware Synapses (MAS) [1] addresses catastrophic forgetting in lifelong learning by measuring parameter importance through output sensitivity and applying a regularization penalty to prevent substantial updates to critical parameters. In our experiments, we selected OKVQA as a representative dataset to estimate parameter importance for generalization and compared MAS with different regularization hyperparameters ($\lambda$), as detailed in the table below. The results show that MAS outperforms other regularization-based methods; however, it provides limited trade-offs between generalization and specialization in the context of multimodal large language models (MLLMs). This limitation arises because the regularization terms cannot effectively maintain stable parameter constraints in large-scale models, negatively affecting overall performance. We will expand upon these observations and their implications in our revised manuscript. *Table: Performance comparison with relative methods. We conduct experiments on the VILA architecture and tune on the Flickr30k dataset.* |Method |OKVQA | TextVQA | OCRVQA | GQA | $\mathcal{A}^S$ | $\mathcal{A}^T$| $\mathcal{H}$| $\mathcal{O}$ | |----|---|---|---|---|---|---|--|--| | Zero-shot | 55.60 | 60.30 | 68.20 | 61.47 | 61.39 | 55.43 | 58.26 | 58.41| | Full FT | 37.99 | 45.17 | 53.85 | 51.14 | 47.04 | 66.68 | 55.16 | 56.86| |L2-Reg| 34.59 | 25.89 | 47.20 | 49.48 | 39.29 | 62.77 | 48.33 | 51.03 | | MAS ($\lambda=0.01$)| 34.86 | 40.45 | 46.50 | 47.61 | 42.35 | 63.63 | 50.85 | 52.99 |SPIDER (Ours)|47.11 | 53.38 | 65.55 |55.57 | **55.40** |**83.49** |**66.61**| **69.45**| [1] Memory aware synapses: Learning what (not) to forget, ECCV, 2018. **Q2: Distribution Sensitivity Effect on Parameter Importance Difference** (Essential References Not Discussed & Other Strengths And Weaknesses & Other Comments Or Suggestions) A2: Our method leverages pre-trained weights and current gradient updates to separately construct parameter importance rankings for generalization and specialization. In our first module, we identify elements relatively important for downstream tasks based on these rankings. Furthermore, we note that naively updating candidate parameters does not reflect parameter-level variance effectively (Eq. 6). Therefore, we propose the Importance Selection Mask, which dynamically allocates higher update rates to parameters exhibiting greater importance discrepancies, as defined in Eqs. 7-8 and illustrated on page 5. Consequently, our approach dynamically restricts parameter updates when the downstream task distribution closely aligns with the generalization distribution, effectively preserving generalization capability.
null
null
null
null
null
null
LOCATE 3D: Real-World Object Localization via Self-Supervised Learning in 3D
Accept (spotlight poster)
Summary: This paper proposes a method called Locate3D, which addresses the task of referring expression-based localization in 3D. The method consists of three key stages: (1) pre-processing stage in which an input RGB-D stream is processed using 2D foundation models SAM, CLIP and DINOv2 in order to obtain 2D features which are then lifted to 3D following the ConceptFusion framework. The resulting point cloud representation associates each point with RGB, CLIP and DINOv2 features. (2) This representation is then used to first train a transformer-based encoder following a self-supervised learning framework called 3D-JEPA. This encoder is trained by using the self-supervised task of learning to predict per-point features in regions that are masked-out by a region sampling process. (3) Next, a decoder that takes as input the embedding representation obtained from the 3D-JEPA encoder, as well as a referring-expression is trained using the tasks of dense-3D segmentation as well as 3D bounding box detection. The model is trained using existing referring expression-based localization datasets as well as the LX3D dataset, which is a dataset proposed in this submission with the aim of extending existing 3D training datasets (ScanNet, ScanNet++, ARKitScenes) with referring-expression annotations in an automatized manner to obtain a referring expression dataset with a larger number and variety of scenes. ### Update after rebuttal The clarifications and discussions provided in the rebuttal have sufficiently addressed my concerns. I am still leaning towards acceptance, and will be keeping my original score (4: Accept). Claims And Evidence: 1. _[L011-L014, right column]_ The claim "They (existing 3D RefExp models trained on small benchmark datasets) often require human annotation at inference time in the form of detailed 3D meshes or object instance segmentation, making them difficult to deploy on real-world devices" is not fully accurate in my opinion. Such methods can use human annotated object masks for instance, but currently there are many strong 3D object instance mask predictors, 3D object detection methods and improved mesh reconstruction methods. The fact these methods were evaluated using GT boxes etc. does not necessarily make them unsuitable to real-world device deployment. I think this is important to clarify as this is critical for positioning the paper. 2. _[Title]_ "Locate3D: Real-World Object Localization via Self-Supervised Learning in 3D": This is not too critical and a relatively minor point: At the first glance, the title leads the reader to start with the assumption that the whole training process takes place within a self-supervised learning framework, whereas the proposed model actually indeed uses GT annotation data consisting of target object localization data paired with the referring expression queries. 3. _[L029-L033]_ It appears to me that the claim "Locate3D operates directly on sensor observation streams without requiring manual post-processing (e.g., 3D mesh refinement or ground-truth instance segmentations), making it readily deployable on robots and AR devices" would mean that given an RGB-D sequence, all processing and inference can take place in close-to real-time. However, I was unable to see an analysis or data on the runtime of the method. If my understanding is correct, the 2D feature computation using SAM/CLIP/DINO also takes place during inference time, and I would expect this part to take a few minutes. This could mean that the claims about real-world deployment readiness might need to be revised. 4. _[L086-089]_ Similarly, I find the claim "Locate3D does not require ground-truth region proposals, meshes, or surface normals, making it suitable for real-world deployment." partially incorrect if I have a correct understanding of the method. It appears to me that the method Locate3D indeed requires ground-truth region proposals, but only during training. But the claim I quoted sounds like the method never needs such paired annotations of segmentation and referring expressions. Please correct me if I misinterpret this statement or perhaps misunderstand this aspect of the proposed method. Methods And Evaluation Criteria: Proposed method is reasonable, successfully targeting the 3D referring expression-based localization task. Proposed method and its components are thoroughly ablated. Evaluation criteria, metrics and chosen benchmark datasets follow the commonly employed evaluation methodology for the task. There are also additional results on the proposed LX3D dataset. Theoretical Claims: N/A Experimental Designs Or Analyses: I found that the experimental design is well-founded in general. The paper follows the established benchmarks and evaluation methodologies for 3D-RefExp task. There are plenty of ablation studies analysing different aspects of the method: input features, encoder configurations and the use of different foundation models. Additionally, I also appreciated experiments on cross-dataset evaluation as well as real-world deployment with a mobile robot setup. Supplementary Material: I reviewed the supplementary video demonstrating robot trials for referring-expression based 3D object localization. I also reviewed some sections of the appendix, especially the sections which I thought were not sufficiently detailed in the main paper. I spent the most time reading Appendix C, D, F and G. Relation To Broader Scientific Literature: 3D-JEPA: This framework very closely follows the original JEPA framework in ideation. I also have identified a few other extensions of the JEPA framework to 3D. Generally I find that the idea is not necessarily novel, but designing 3D-JEPA in the context of improving per-point features lifted from VLM models is a novel idea as far as I can judge. LX3D dataset is generally similar to existing benchmarks for 3D-RefExp task, but its key benefit appears to be the size of the dataset and the automatized generation of referring expression captions. However, such automatic caption generation methods were indeed proposed in prior work. The claims about the proposed contribution about achieving strong 3D-RefExp results "with fewer assumptions compared to prior models" need to be made a bit more sound for us to be able to evaluate the novelty of Locate3D a bit better. Essential References Not Discussed: Related works are generally well discussed, but there are a few works that could be added. For instance, ConcreteNet (Unal et al, ECCV 2024, _"Four Ways to Improve Verbo-visual Fusion for Dense 3D Visual Grounding"_) only takes a point cloud as input, which appear to be under very similar assumptions as this work. In L394-395 of the submission, it is mentioned that "few prior works operate under this setting", but according to my understanding ConcreteNet follows this setting as well. Other Strengths And Weaknesses: *Strengths* - The paper addresses a relevant and important task (3D referring-expressiong based localization). I also appreciated that the proposed method aims to target more realistic conditions with sensor PC instead of refined meshes. - The paper is generally written well with clarity. - I found that the proposed method is reasonable, and it is not an overly-complicated framework. - A very systematic and detailed evaluation of the method components is presented. - There are additional data contributions (if they will be publicly released) *Weaknesses* - I found that the level of detail regarding the 3D-JEPA framework could be improved as currently it reads a bit high-level. However, as this is one of the main proposed contributions of the paper, I would have expected to see a much detailed discussion on 3D-JEPA, as well as how it relates to the original JEPA framework, especially considering that many readers in the 3D scene understanding domain might not necessarily be familiar with the JEPA framework. - I did not find a thorough discussion on the limitations of the proposed method. Other Comments Or Suggestions: *Minor Comments* - 3D-JEPA term and what it stands for is never explicitly explained, at least not in the main paper. While there is a small reference to the JEPA framework (Assran et al.) in L135, this is still very unclear, as the broader audience/3D scene-understanding community might not be familiar with this particular framework. *Typos and grammatical issues:* - _L094 (left column) "Finally, it also successfully localised objects ...":_ "localises" might be better suited to the flow. - _L117 (left column) "The point cloud_ $\mathrm{PtC_{lift}}$ _can be directly use for object localization ...":_ should be "used". - L676 (appendix) "... figure 5 ...":_ should be "Figure 5". - _L1038 (appendix) "... while the PCA projections reveals that the encoder ...":_ should be "reveal". - _L1039 (appendix) "... while the decoder learns more sharped localized features ...":_ the term "sharper" might be better instead of "more sharped". Questions For Authors: - Is the LX3D dataset proposed in this work planned to be publicly released? - I am a bit unclear about the inference setting. At inference time, does the method still require the RGB-D stream and does it still need to use SAM/CLIP/DINO to obtain initial per-point features? If so, I would expect the feature extraction to take a few minutes for a given small/medium sized indoor scene, which might affect the claims regarding how much the proposed method is suitable for real-world deployment. As one of the main claims is "Locate3D operates directly on sensor observation streams without requireing manual post-processing (e.g., 3D mesh refinement or ground-truth instance segmentations), making it readily deployable on robots and AR devices" ([L029-L033]), how much time it takes to get the predictions is critical. - How is the generalizability of the method in terms of identifying novel classes affected by the contextualization of the features via the 3D-JEPA encoder? I understand that this process would enable the method to have less noisy features that are also more informative about the spatial relationships between scene objects. However, at the same time would this not result in a decline in novel-object localization performance? Also in the context of the 3D-RefExp task, I would expect the localization performance w.r.t. object-attribute based queries (such as "red chair") might also be negatively affected as the scene features now cross-attend to each other. Is there any component that is designed to addresses the preservation of object-attribute related features? - Generally, it would be great if a short discussion on the points listed in the "Claims and Evidence" section can be provided, especially number 3 and 4 (which mainly discuss the same aspect) - How is the mask and patch feature aggregation performed? I wasn't fully clear about this from L107. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer for the thorough reading of our manuscript and valuable comments. Below we address each point raised in detail **Clarification regarding our LX3D** We noted the reviewer mentions our dataset being automatically generated, we'd like to clarify that our dataset is human-collected. Annotators curated and validated every sample. We invite the reviewer to read more details about our annotation setup in Appendix D. **Runtime analysis and assumptions [L029-L033]** Indeed, extracting 2D features and lifting them to 3D is computationally expensive. We address this by caching the environment’s 2D features for each view, as well as the featurized point cloud. For ScanNet experiments, we compute this cache offline; and for robot experiments we compute it while doing the initial environment exploration phase. With this feature cache, a forward pass of our model takes ~1 second for a scene with ~100k feature points and utilizes ~8 GB of VRAM on an A100 GPU. We can utilize such caching because our benchmarks operate under static (ScanNet) or quasi-static (robot) environments. Extending our approach to dynamic scenes would require real-time 2D feature computation and continuous updates to the featurized environment. We believe the former is a matter of engineering, while the latter is an active research area, explored by methods like Lifelong LERF (A. Rashid, 2024). We will include these assumptions in our limitations section. **[L086-089] rewording** We propose rewording this claim to more accurately state: "Locate 3D does not require ground-truth region proposals, high-quality mesh sampled pointclouds, or surface normals at inference time, making it suitable for real-world deployment." **Clarification on positioning** We propose revising [L011-L014, right col] to clarify: "Many existing 3D RefExp models are evaluated on benchmark datasets with pointclouds sampled from high-quality post-processed 3D meshes or object instance segmentations already available. In contrast, Locate 3D operates directly on raw sensor observation streams without requiring intermediate processing steps like mesh reconstruction, instance segmentation, or pre-trained object detectors at inference time." We want to highlight that most previous work has been trained and evaluated utilizing mesh pointclouds, while our approach leverages pointclouds obtained directly from RGB-D sensors (please see our response to reviewer C37t on this topic). **Discussion of ConcreteNet** We will add ConcreteNet to the related works section and baselines in Table 1. We note that this method operates on ScanNet's mesh-sampled pointclouds rather than sensor pointclouds. While ConcreteNet achieves 56.12% Acc@25 on ScanRefer, our approach demonstrates superior performance (59.9%) under the more challenging setting. **3D-JEPA exposition** We appreciate the observation. In our revision, we will provide a clearer introduction to this concept in section 2.2 explaining (1) The core principles of the JEPA framework (2) Why adapting this approach to 3D pointclouds (3) Main deltas to the original framework. **Discussion on Limitations** We will add a limitations section discussing (1) our current approach's constraints with dynamic environments (2) our method being limited to few-room environments (potentially addressed by exploring RoPE positional embeddings or similar techniques). **Typos** We appreciate the careful proofreading, we will correct all of these in the revised manuscript. **Release of LX3D dataset** We will publicly release the LX3D dataset, trained Locate 3D model, and 3D-JEPA backbone upon publication. **Generalizability (novel-object localization)** We believe both questions can be boiled down to how well CLIP features are preserved through 3D-JEPA pre-training. Our linear probing experiments in Section 4.2 address this, particularly on the "noun correctness" task 3D-JEPA achieves 73% accuracy compared to ConceptFusion's 66%. These results demonstrate that 3D-JEPA not only preserves but enhances the semantic understanding from foundation models. This pattern mirrors developments in language modeling, where contextualized embeddings (like BERT) consistently outperform non-contextualized word embeddings for both common and rare words. **Mask and patch feature aggregation** For 3D points observed in multiple views, we aggregate features by weighted averaging. We voxelize the pointcloud (5 cm voxel size) and compute a single feature per voxel by weighted averaging all contained features. Weights are calculated using trilinear interpolation based on distance to voxel boundaries. This weighted averaging approach is common practice in both our baselines (e.g., ConceptFusion) and traditional RGB-D mapping literature (Real-time 3D Reconstruction in Dynamic Scenes using Point-based Fusion - ICCV 2013). We explored alternative aggregation methods (max pooling, simple average, summation) before selecting weighted averaging. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the rebuttal. The clarifications and discussions provided in the rebuttal have sufficiently addressed my concerns. I am still leaning towards acceptance, and will be keeping my original score (4: Accept).
Summary: This paper focuses on object localization via referring expressions in 3D real-world scenes to understand the 3D physical world, which is a valuable task in 3D perception and understanding. It introduces an end-to-end 3D transformer network to get point cloud, 3D features, and text query as input, and output the potential masks, box, and predicted text tokens. To get better 3D feature representation, it uses a novel self-supervised learning algorithm for 3D point clouds, 3D-JEPA, which is inspired by the 2D JEPA framework. The motivation is clearly explained in the paper. 1. A novel SSL method for 3D point clouds, 3D-JEPA. It demonstrates that by adopting the pre-trained features from 3D-JEPA, the global perception capability is greatly enhanced. It can also provide improvements for downstream locate3D tasks in both in-domain and out-of-domain scenarios. 2. An end-to-end 3D transformer model for the 3D referring expression task, locate3D. It leverages the overall performance on public 3D benchmarks like ScanRefer, SR3D, and NR3D. 3. An incremental 3D referring expression dataset, LX3D. The samples are originally captured in ScanNet, ScanNet++, and ARKitScenes, and expand the language annotations. Claims And Evidence: none Methods And Evaluation Criteria: M1. The idea of deriving btter 3D features from 2D pre-trained features seems somewhat unconvincing. How do you demonstrate that the 3D capability is obtained based on 2D pre-trained features. Although PTV3 is used as the encoder, the approach is still primarily based on 2D features and their spatial contextual proximity for supervision. Therefore, I have some concerns about the name "3D-JEPA". M2. What is the strategy for aggregating multi-view features for the same point? Does this strategy affect the accuracy of the final referring expression task? How do you handle multiple features from different perspectives for the same point? M3. During the 3D-JPEA training, to what extent can the mask be achieved? Have you considered mask the points with non-random yet reasonable sampling methods such as farthest point sampling ? Theoretical Claims: none Experimental Designs Or Analyses: In Table 1: E1. The comparison with GPT-related methods seems unfair. How is the VLM+baseline method implemented is not discussed in the paper. Did it utilize depth information? E2. How does it compare with other 3D-LLM methods, such as 3D-LLM[1], Leo[2], or related works, which have also been implemented on datasets like ScanRefer? E3. There is a lack of comparison with other 3D semantic alignment and scene-level features. In Table 2: E4. Is PTV3 trained from scratch or using pretrained weights? E5. Why does random initialization perform better than the 3D-JEPA fixed scenario? What is the context of the 3D-JEPA Frozen example, and please provide an explanation. [1] 3D-LLM: Injecting the 3D World into Large Language Models [2] An Embodied Generalist Agent in 3D World Supplementary Material: None. Relation To Broader Scientific Literature: None. Essential References Not Discussed: 1. 3D-JEPA: A Joint Embedding Predictive Architecture for 3D Self-Supervised Representation Learning Other Strengths And Weaknesses: None. Other Comments Or Suggestions: Please see above. Questions For Authors: Q1. I am very interested in understanding what additional information, at the feature level, is provided by 3D-JEPA training compared to the original 2D pre-trained features. Additionally, how can this be demonstrated experimentally? Q2. I am a little confused the metrics and settings in table1 when it compares to other GP4-4o methods. For those MLLM methods, how to locate the 3d bbox? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback. Their comments have helped us identify areas where we can better explain and justify our technical contributions. We address each comment in detail below. **Deriving better 3D features from 2D features; 3D-JEPA** The 3D nature of our approach comes from the architecture and learning objective rather than the individual pointwise inputs. Our model processes the entire featurized pointcloud using explicit 3D coordinates to produce per-point features. 3D-JEPA's objective forces the model to learn additional information beyond 2D features through masked prediction in 3D space, requiring it to infer features for occluded regions using contextual information from the surrounding visible 3D scene. In short, the "3D" in 3D-JEPA refers to the domain in which the representation learning occurs. Our approach has precedent in prior work like OpenScene, which demonstrated that distilling 2D vision-language features to 3D via contrastive learning results in 3D models that outperform the original 2D features. Empirical evidence shows 3D-JEPA features encode valuable information beyond lifted 2D features: 1. 3D-JEPA features demonstrate superior performance in linear probing experiments (Section 4.2) when compared to ConceptFusion features (34%→39% and 66%→73%) 2. Frozen 3D-JEPA features provide ~5% improvement over ConceptFusion inputs to the decoder (Table 2) 3. ConceptFusion (direct 2D-to-3D feature lifting) achieves 20% success on our robot evaluations compared to Locate 3D's 80% (Table 9) **Aggregating multi-view features** We voxelize the pointcloud at 5 cm resolution and aggregate multi-view point features within each voxel using weighted averaging based on trilinear interpolation distances. This aligns with standard practice in prior works (e.g., ConceptFusion); given space constraints, please also refer to our response to reviewer gKhM. **Farthest point sampling for masking** Yes, we tried farthest point sampling but did not see improvements. We additionally experimented with several other masking strategies as discussed in Appendix A.1. However, none of the methods improved over random masking (similar to the findings in Assran et. al., 2023). Thus, we use random masks for their computational efficiency. **Fairness of VLM baselines and usage of depth information** Yes, the two VLM baselines use depth information. In fact, they use the exact same inputs as Locate 3D (RGB-D + camera pose and intrinsics) to ensure fair comparisons. The VLM baselines are detailed in Appendix F. At a high level, these baselines use a modular pipeline that consists of first selecting an RGB frame using a VLM, then selecting an object in the 2D frame using GroundingDINO, SAM-2, and a VLM, and finally determining a 3D bounding box using depth and camera extrinsic and intrinsic information. **Comparison with 3D-LLM and LEO on ScanRefer** Locate 3D significantly outperforms 3D-LLM [1]. Specifically, 3D-LLM reports 30.3% Acc@0.25 on ScanRefer, while Locate 3D achieves 59.9% (Table 1). We will add results from 3D-LLM [1] to Table 1. LEO [2] is evaluated on ScanQA but not on ScanRefer (note that ScanQA assumes text outputs, while ScanRefer requires bounding boxes or instance masks). We will discuss LEO in related work. **Comparison with other 3D semantic alignment and scene-level features** Please refer to our previous response for a comparison of our approach to 3D-LLM, which trains a model to consume the entire scene. We are also happy to discuss additional related work if you might have concrete suggestions. **PTV3: Scratch vs pre-trained in Table 2** In our approach, PTV3 is pre-trained with 3D-JEPA. In Table 2, we study alternatives including “random” initialization (i.e., PTV3 is initialized from scratch), and initializing from “PonderV2” weights. In all three cases, the encoder is finetuned end-to-end for referential grounding. We will add these details to the Table 2 caption. **Table 2: Random initialization vs. 3D-JEPA (Frozen)** The better performance of random initialization compared to frozen 3D-JEPA can be attributed to the significant number of extra trainable parameters (~250M) that are optimized specifically for the referential grounding task. When we freeze the 3D-JEPA encoder, we only train the decoder and significantly limit the model's ability to adapt the encoder parameters to the task at hand. However, it's important to note that fine-tuning the 3D-JEPA encoder (our full model) provides the best performance, showing that while trainable model capacity is important, the pre-trained weights offer valuable initialization that leads to better results when allowed to adapt to the target task. **Table 1 clarifications (metrics, setting, VLM)** The VLM baselines (LLaMA and GPT-4o) use the exact same metrics and settings as other methods (such as Locate 3D) in Table 1. Specifically, the VLM baselines return 3D bounding boxes as detailed in Appendix F. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal, the replies address my primary concerns effectively. So I raise my rating as weak accept.
Summary: The paper proposes Locate 3D, a model for 3D grounding which achieves SOTA performance and strong out-of-domain generalization. Specifically, a novel self-supervised learning method, 3D-JEPA, is proposed, generating contextualized features for the scene. Also, a new dataset LX3D is proposed to test the robustness of the proposed model. ## update after rebuttal Please see the rebuttal comment below. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed methods and evaluation are reasonable. Theoretical Claims: No theoretical claims and proofs involved. Experimental Designs Or Analyses: The proposed new dataset LX3D contains data from ScanNet and ScanNet++, where the visual data are similar. This poses some question in terms of the effectiveness on evaluating the robustness for out-of-domain data. Supplementary Material: The reviewer has checked all supplementary material, including the video. Relation To Broader Scientific Literature: This paper proposes a framework, which enables 3D grounding on posed RGB-D frames. It shows the real-world deployment on robots, and could potentially extend to AR devices. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: 1. The paper is well-written, with clear structure and illustrations. Weakness: 1. It's unclear how the authors define mesh PC and sensor PC. The reviewer notice the section C.3 in the appendix, but still get confused about the categories in Table 1. The authors could further clarify the input format for different methods, and the key difference for this paper. Other Comments Or Suggestions: For table 1, please bold the best result for Acc@50 on ScanRefer for consistency. Questions For Authors: 1. It's unclear how the authors define mesh PC and sensor PC. For the mesh pc methods in Table 1, the reviewer believe most of them are using point cloud as input, not mesh. 2. For 3D-VisTA, how do you evaluating under Mesh PC and Sensor PC + Proposals from Mesh PC settings? Why not Sensor PC and Sensor PC + Proposals from Mesh PC settings, which is more comparable. 3. Based on Figure 3, how do you choose the final prediction from the Q generated box? In the related work section (line 404-right column), the authors mention that 3D grounding approaches that 'provide probabilities for region proposals' is 'often difficult in 3D, and are prone to failures'. The reviewer is curious about the differences. 4. The proposed new dataset LX3D contains data from ScanNet and ScanNet++, where the visual data are similar. This poses some question in terms of the effectiveness on evaluating the robustness for out-of-domain data. Also from the results in table 4, the LOCATE 3D is not the best in multiple subsets. 5. In table 1, it is interesting that the proposed methods outperforms the existing methods in Acc@25, but not on Acc@50. Do the authors have some intuition why it is the case? 6. In table 3, it is unclear why the authors choose to use different visual backbones. Does CLIP+DINO indicates using different backbones for masks and image separately, or using both features for both masks and the image? If it is the former one, why using different backbones for masks and image? Why not directly using CLIP for both or DINO for both? The reviewer may adjust the final rating after rebuttal based on the clarifications from the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough feedback. Below we address each point in detail **Clarification on mesh PC vs sensor PC** By "Mesh PC" we refer to pointclouds sampled from carefully reconstructed 3D meshes that undergo extensive post-processing. Most prior works use this format as it’s ScanNet’s default. In contrast, "Sensor PC" refers to pointclouds obtained directly from RGB-D sensors. These contain raw depth measurements including noise, missing regions, and registration errors. This format better represents real-world deployment scenarios. Methods originally reported on Mesh PC suffer significant performance drops (8-10%) when trained and evaluated on sensor data (Table 1), 3D-VisTA drops from 53.1% to 45.9% Acc@25 and BUTD-DETR from 50.28% to 40.7%. This gap was also observed by Odin (Jain et al, 2024). Under the (more rigorous) evaluation setting comprising sensor pointclouds, Locate 3D achieves SoTA (61.7% Acc@25) performance. We will clarify this terminology in the paper. > For the mesh pc methods in Table 1, the reviewer believes most of them are using pointcloud as input, not mesh. The reviewer is correct – while these methods use pointclouds as input, we use the term "Mesh pointcloud" to indicate that these pointclouds are sampled from cleaned, post-processed meshes. **3D-VisTA evaluation clarification** 3D-VisTA was originally designed for mesh pointclouds with a two-stage architecture: first using Mask3D to generate object proposals, then processing these proposals with the pointcloud for grounding. Due to implementation constraints and pre-computed proposal dependencies, we report results using sensor PC for 3D-VisTA while maintaining the original Mask3D proposals from mesh PC. We note that **this represents an upper bound for 3D-VisTA's performance** on sensor data, as using sensor PC for proposal generation would only decrease performance further. Even with this advantage, 3D-VisTA achieves 45.9% Acc@25, while Locate 3D significantly outperforms it at 61.7%. **Question about query selection** For ScanNet benchmarks we select the query with the highest “token probability” for the target noun. In our robot experiments, we use an off-the-shelf LLM to parse the referring expression and identify the target noun before applying the same selection mechanism. > Authors mention that 3D grounding approaches [...] are prone to failures'. The reviewer is curious about the differences. These approaches, such as 3D-VisTA (Zhu et al., 2023) and PQ3D (Zhu et al., 2024), rely on external object detectors to generate proposals without considering the language query and then train their model to select one. If the object proposal misses, the entire model is bound to fail. Single-stage approaches like ours and BUTD-DETR (Jain et al., 2022), directly predict the bounding box by jointly considering the scene and language tokens. We will incorporate this discussion into our paper. **LX3D out-of-domain evaluation clarification (ScanNet vs ScanNet++)** We see key differences in scanning hardware, setting (partial scene vs. full scene), scene size, and distribution of queries between the train and eval settings. Further, our eval set contains both ARKitScenes and ScanNet++. While we agree that some aspects of the visual distribution remain constant (e.g. common object classes), we believe that the evaluation faithfully represents the domain gap present during a new deployment of our model to indoor scenes. As for your latter point (Locate 3D not being the best in multiple subsets) – we believe the VLM approach is strongest on ScanNet++ because each input contains only 5 frames, and Locate 3D not achieving maximum performance on FRE is an artifact of the small eval set size (~250 annotations on 1 scene). **Better perf on Acc@25 but lower on Acc@50** This discrepancy arises because PQ3D operates on mesh pointclouds, while our method works on sensor pointclouds. At lower IoU thresholds (Acc@25), our approach effectively predicts the bounding box despite sensor data imperfections. However, at higher thresholds (Acc@50), achieving precise alignment becomes more challenging, resulting in a more significant performance drop for sensor-based methods. In short, PQ3D is evaluated in a more relaxed setting than ours, and is at an advantage. Despite our best efforts, we could not re-train PQ3D with sensor pointclouds due to their use of multiple backbones and multi-stage training strategies, making a direct comparison difficult. **CLIP and DINO backbones in Table 3** Whenever we use CLIP features, we first obtain masks using SAM and then extract CLIP embeddings for these masks following Conceptfusion. **This approach is necessary because CLIP produces global image embeddings, not patch or pixel-level features**. For DINO, we directly use its dense patch-level features and map them to individual points without requiring a segmentation step. We will add this explanation to Section 2.1. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. The authors have addressed all my concerns. I have also read the reviews from other reviewers and authors' rebuttal. I would like to increase my rating to accept.
Summary: This paper introduces Locate 3D, a model for localizing objects in 3D scenes from referring expressions, achieving state-of-the-art performance on standard referential grounding benchmarks. The key innovation is 3D-JEPA, a self-supervised learning (SSL) algorithm to learn contextualized scene representations. Locate 3D outperforms existing methods on SR3D, NR3D, and ScanRefer with fewer assumptions. Additionally, the LX3D dataset (130K+ language annotations) boosts generalization. Experiments demonstrate the model's strong performance and potential for deployment in real-world systems. Claims And Evidence: The claims made in the submission are generally well-supported by clear and convincing evidence. However, the claim about deployment could benefit from stronger evidence or additional clarification: - “... making it readily deployable on robots and AR devices” is supported by the experiments in Sec. 4.4. While the paper demonstrates deployment on a robot, the AR application is not explicitly evaluated. Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally make sense for the 3D object localization task, but there are some areas for potential improvement. - Locate 3D is designed for real-world deployment, but the current version of submission lacks the discussion of inference speed, memory usage, or computational cost. - I understand LX3D improves performance, but it is still unclear whether the improvement is due to data volume or diversity? Theoretical Claims: The paper primarily focuses on algorithmic and empirical contributions, rather than formal theoretical claims with proofs. For line 113-115, authors claim the feature dimension “combines RGB, CLIP and DINOv2 features”. It is unclear how to combine those features as this is essentially an explanation of $f_i$ in the corresponding equation. Experimental Designs Or Analyses: The experimental designs and analyses are well-structured and easy to follow. I have mentioned some points related to experiments in the previous sections to help strengthen this submission. In this part, it is also interesting to discuss how different 2D image features influence the performance of located 3D as it is mentioned in line 115-116 without experimental analysis. Supplementary Material: Yes, I reviewed the whole supplementary material. I suggest including some experiments mentioned above in the supplementary (e.g., AR applications). Relation To Broader Scientific Literature: The key contribution of the paper is closely related to some prior findings/results/ideas including (1) 3D object localization work, (2) Serf-supervised Learning (SSL) on 3D representations and (3) 2D foundation models. Different from previous 3D object localization work, Locate 3D operates directly on sensor observation streams, and works with fewer assumptions. 3D-JEPA is a SSL method inspired by previous SSL work like PointMAE, which is specially designed for 3D object localization in order to learn conceptualized representations of 3D scenes. Large-scale 2D foundation models like DINOv2 and CLIP are treated as tools by Locate 3D for a better capability of localization. Essential References Not Discussed: One of the core ideas of this work is to lift the general features produced by 2D foundation models. Therefore, I suggest discussing more about the existing work which conveys similar ideas in broader 3D scene understanding aspects. For example, OpenScene (CVPR’ 2023), Lift3D (CVPR’ 2024), etc.. Other Strengths And Weaknesses: Locate 3D has the potential to be a useful tool for some important downstream applications regarding higher-level 3D understanding since localization could be fundamental operations for scene understanding. Though the overall experimental structure in this submission is easy to follow, there are still some incomplete aspects regarding the LX3D dataset, especially considering that it is claimed as a core contribution. In this version of the submission, there is a lack of sufficient quantitative analysis (such as the aspects mentioned above) and qualitative analysis (e.g., annotation quality and granularity) of the LX3D dataset, which raises concerns about its reliability. Moreover, in the preprocessing stage, the authors mention using SAM to process images and obtain masks. However, SAM-generated masks do not ensure multi-view consistency, which is confusing to me. Using such masks for subsequent processing carries the risk of introducing excessive noise. Other Comments Or Suggestions: Overall, I believe this paper presents a useful tool for the community, but the experiments and analysis are not sufficiently thorough, which weakens some aspects of its contribution. However, if the authors can address my concerns, I would consider adjusting my score accordingly. Questions For Authors: I have the following key questions that, if addressed, could potentially change my evaluation of the paper: 1. Clarification on LX3D Dataset Analysis 2. Computational Efficiency and Scalability 3. Multi-View Consistency in SAM-based Preprocessing 4. Claimed AR related applications For the main concerns listed above, details can be found in the previous sections. For other minor concerns mentioned, I believe addressing them could also help strengthen this submission. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and constructive feedback. Below we address each point in detail. **Clarification about AR application** Indeed, we do not demonstrate deployment on an AR device. Our intended claim is that Locate 3D's input format is directly compatible with data streams from AR devices and robots - it operates on raw RGBD and camera parameters, which is the standard output format of these platforms. We demonstrate this compatibility through evaluation on diverse sensor-captured datasets (ScanNet, ScanNet++, ARKitScenes) as well as robot deployment. We will revise our phrasing to clarify that we're referring to input compatibility rather than explicit AR deployment. **Inference speed, memory usage, computational cost** We will add this discussion to the paper. Our inference pipeline operates in two phases. First, we compute and cache 2D features and a featurized pointcloud of the environment. This can be done offline for static environments like ScanNet, or during an initial exploration phase for robots. With this cache in place, our model performs inference in approximately one second for a scene with ~100,000 feature points with an average peak VRAM of ~8.1 GB on one A100. **LX3D dataset analysis; data volume vs diversity** We ran an additional experiment, in which we find that **scene diversity is a key factor in LX3D improvements**. Specifically, we compare two conditions: (1) ScanNet training data + 30K LX3D samples also from ScanNet and (2) ScanNet data + 30K LX3D samples from ScanNet++ (i.e., same quantity, but better quality). We find that training on better quality scenes (2) outperforms (1) by ~2%. We will include this additional experiment in a revision. The annotation quality of every sample LX3D was validated by human annotators. Specifically, only samples marked as unambiguously correct are included in the final dataset to ensure reliability, more details about our annotation setup and dataset can be found in Appendix D. Furthermore, we will publicly release the LX3D dataset upon publication, providing a high-quality asset to support future research in 3D referential grounding. **Combining RGB, CLIP, DINO features** The features are combined by concatenating CLIP and DINO features with a harmonically embedded representation of the RGB features. Specifically, CLIP features are generated for SAM masks, with a learnable token used when mask features are unavailable. Full details are provided in Appendix A.2 and we will add a summary to the main paper. For more details, we invite the reviewer to read the response to a similar question posed by reviewer *gKhM*. **Impact of different 2D image features** We provide experimental analysis in Table 3. We observe that features extracted from larger models (CLIP-L, SAM-H) consistently outperform those extracted from models with fewer parameters (CLIP-B, MobileSAM), with CLIP-L achieving 59.2% Acc@25 compared to CLIP-B's 53.7%. Additionally, concatenating features from CLIP and DINOv2 significantly improves results compared to using either of these in isolation, reaching 61.7% Acc@25, indicating that each of the features provide complementary information that benefits 3D object localization. **Discussing related work that performs lifting of 2D features (OpenScene, Lift3D)** We appreciate the suggestion. Approaches such as OpenScene (CVPR 2023), Lift3D (CVPR 2024), and prior ideas proposed in distilled feature fields (Neurips 2022), similarly lift 2D features to 3D via neural rendering or contrastive learning. However, we incorporate two key advances in addition to lifting 2D features. (1) our self-supervised 3D pretraining (3D-JEPA) approach contextualizes the lifted 3D features, (2) our specialized language-conditioned decoder enables precise referential object localization. Both (1) and (2) contribute significantly to our performance. As shown in Table 9, Locate 3D+ (8/10) substantially outperforms Concept Fusion (2/10), a zero-shot feature lifting method, on complex referential tasks. We'll add this discussion to the related work section. **Multi-view inconsistency of SAM masks** Correct, SAM masks are not multi-view consistent. This limits the performance of prior methods that use 2D features lifted to 3D in a zero-shot fashion. However, our SoTA results (in Table 1) demonstrate that **3D-JEPA effectively learns to deal with such noise**. Qualitatively, we find that 3D-JEPA features are smoother and more consistent than 2D lifted features (as shown in Figure 1), demonstrating (again) that our method is robust to such issues. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, which has addressed my main concerns. And I will keep green light for this submission.
null
null
null
null
null
null
AutoAdvExBench: Benchmarking Autonomous Exploitation of Adversarial Example Defenses
Accept (oral)
Summary: This paper introduced a benchmark for evaluating LLMs' ability to autonomously bypass adversarial defenses. Unlike existing benchmarks, which predominantly comprise small-scale proxy tasks, this framework provides a more rigorous assessment of LLMs' capacity to replicate tasks typically conducted by security professionals. Additionally, the authors proposed a novel agent-based methodology for circumventing real-world adversarial defenses within the proposed framework. Claims And Evidence: The study makes the following key claims: (1) a large-scale benchmark dataset (redacted for review); (2) a strong LLM agent designed for security analysis; and (3) a benchmark framework that systematically evaluates LLM agents’ capabilities as security experts. These claims are substantiated through: (1) the authors’ commitment to releasing the benchmark post-review (2) comprehensive empirical results demonstrating the agent’s performance on the benchmark, as detailed in the paper; and (3) a rigorously designed methodology that validates the benchmark’s usefulness Methods And Evaluation Criteria: The authors' effort in constructing a large-scale dataset deserves recognition, encompassing both manual and automated methodologies. This involved systematically crawling papers related to adversarial attacks, followed by an extensive screening process. The resulting benchmark, grounded in real-world defenses, highlights the study’s commendable completeness in paper selection and methodological novelty. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: The experimental design and analytical methods employed in this study do not exhibit significant limitations, as the novel nature of the approach precludes direct comparison with established benchmarks Supplementary Material: Yes, it includes additional background information. Relation To Broader Scientific Literature: This benchmark holds potential for extension to medical imaging domains, which could advance robustness research in the medical field. Essential References Not Discussed: No. Other Strengths And Weaknesses: But out of scope of this paper, I think it can be improved for specifying detailed defense protocols and attack protocols for autonomously testing automated attacks~[1,2]. [1] Fu, Qi-An, et al. "{AutoDA}: Automated decision-based iterative adversarial attacks." 31st USENIX Security Symposium (USENIX Security 22). 2022. [2] Guo, Ping, et al. "L-autoda: Large language models for automatically evolving decision-based adversarial attacks." Proceedings of the Genetic and Evolutionary Computation Conference Companion. 2024. Other Comments Or Suggestions: See above. Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are glad the reviewer finds our claims substantiated, the completeness of our paper commendable and our method novel. We also thank the reviewer for providing relevant references. We will provide more details on attack and defense protocols for autonomously testing automated attacks in the camera ready.
Summary: This paper introduces AutoAdvExBench, a benchmark for evaluating large language models' ability to autonomously exploit defenses against adversarial examples. Unlike proxy security benchmarks, AutoAdvExBench directly measures LLMs' success on tasks regularly performed by machine learning security researchers. The authors curate 71 adversarial example defenses (47 real-world implementations and 24 "CTF-like" homework exercises) and design an agent to exploit them. Their key finding reveals a significant gap between benchmark types: their best agent achieves 75% success on CTF-like defenses but only 13% on real-world defenses, highligting the challenges of working with unstructured research code versus pedagogical implementations. The paper also shows that despite LLMs getting better, they can't comprehensively replace the machine learning security researcher yet. Claims And Evidence: The paper's central claim about the difficulty gap between CTF-like and real-world defenses is well-supported by the empirical results. The authors construct a compelling dataset through careful filtering of arXiv papers, manual verification, and reproduction of defense implementations. Their methodology for evaluating attack success is clear and follows standard practices in adversarial robustness research. The claim that this benchmark provides a more realistic assessment than previous security benchmarks is reasonable, though could benefit from direct comparison with existing benchmarks like Cybench. The paper lacks evidence that success on this benchmark would translate to novel research results beyond demonstrating that current models struggle with the task. Methods And Evaluation Criteria: The methods for benchmark construction are thorough and well-justified. The authors start with 612,495 arXiv papers and systematically filter down to 47 reproducible defenses. Their four-stage evaluation process (implement forward pass, make differentiable, implement FGSM, extend to PGD) follows standard adversarial attack methodologies. The evaluation metric (attack success rate measured by robust accuracy) also makes sense. Their agent design builds on established approaches for tool use and code generation, with a task-specific adaptation that improves performance. The decision to report results as CDF-like curves of defense accuracies rather than binary success/failure metrics provides more granular insight into model capabilities. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is sound. The authors compare multiple LLM backends (Claude 3.5 Sonnet, GPT-4o, o1) and explore combinations (Sonnet 3.5 + o1 supervision), which provides insight into different models' capabilities. Breaking down success rates by attack stage helps diagnose where models struggle. One limitation is the absense of a human baseline; the paper argues these defenses are challenging even for expert researchers but doesn't measure human performance on the same benchmark. This makes it difficult to contextualize the 13% success rate - is this far from human capabilities or relatively close (i.e., how would an individual human do, compared to the field overall)? Supplementary Material: No Relation To Broader Scientific Literature: The paper positions itself at the intersection of LLM evaluation and adversarial machine learning. It builds on agentic benchmarks like SWE-Bench while addressing a specific security-relevant task. The authors discuss how their approach differs from other security benchmarks by focusing on end-to-end tasks rather than proxy metrics. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: * The benchmark addresses an important gap in evaluating AI systems' ability to perform security-relevant tasks * The careful curation of diverse defenses represents significant effort and contribution to the community * Breaking down success by attack stage provides excellent diagnostic information about model capabilities Weaknesses: * The benchmark may have limited shelf life as models improve (this is acknowledged by the authors) * It'd be nice to actually have a study to get the human baseline number (although this would take additional work) Other Comments Or Suggestions: * It could be nice to see which specific defenses were successfully attacked, and to what extent they have common characteristics Questions For Authors: * Did you observe any correlation between attack success and defense complexity metrics (e.g., lines of code, number of dependencies, or code quality measures)? * Have you considered evaluating human performance (perhaps from security experts) on a subset of these defenses to provide context for the 13% model success rate? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are happy that the reviewer believes that our benchmark addresses an important gap and recognizes the significant effort and contribution. **Would success on this benchmark translate on novel research results?** Papers that included breaks of several adversarial example defenses were published at top-tier venues \[1, 2, 3\], so we believe that, if an agent were to fully solve the benchmark, it might find results worthy of publication. **Limited shelf life**. We agree that our benchmark, like all other benchmarks, has limited shelf life. However, we believe that, once the benchmark will be solved by an agent, we will know that said agent is very likely to be able to perform tasks at the level of a machine learning security researcher. **Human baseline**. A few of the defenses were already successfully attacked by human researchers, however, they were not all evaluated in the same threat model, so a fair comparison would be hard. In general, we would expect the success rate for a human researcher to be much higher than the current 13%. **Correlation between defense complexity metrics and attack success**. We will look into this and, if we find interesting insights, will add the results to the camera-ready version. **List of successfully attacked defenses**. We have already created a webpage (which we will reference from the paper only in the camera-ready version for anonymization reasons) that contains all the traces coming from the execution of the agents on all the defenses, including how successful each model is at attacking each defense. References: - [1] Anish Athalye, Nicholas Carlini, and David Wagner. *Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples*. ICML, 2018\. - \[2\] Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. *On adaptive attacks to adversarial example defenses*. NeurIPS 2020\. - \[3\] Francesco Croce, Sven Gowal, Thomas Brunner, Evan Shelhamer, Matthias Hein, and Taylan Cemgil. *Evaluating the adversarial robustness of adaptive test-time defenses*. ICML 2022
Summary: The paper proposes a new benchmark to evaluate LLMs and in particular their reasoning capabilities. The tasks purposed in the paper is an end-to-end real world task that consists in generating the code of a new attack based on and existing defence for image classification. In practice the LLM based agent has access to the paper of the defence, the implementation code, the perturbation bound and 1000 images. The evaluation metric is the success rate of the attack suggested by the agent. ## update after rebuttal This is an interesting paper that can have significant impact on the machine learning security research. Any newly proposed defence should from now be challenged against this LLM-based attack generation approach. All reviewers are positive and I am definitely supporting acceptance of this work. Claims And Evidence: The paper claims that benchmarks should consider real-world end-to-end tasks and not predefined exercises-like (i.e. CTF like) tasks, as the latest is not a good proxy for the former. This claim is verified in the experiments: existing approach are effective on exercises-like tasks, (75%) but the success rate vanishes for real world tasks (13%) while the goal of the task remain the same (generating the code for adversarial attack again a specific defence). The claim that the benchmark is challenging and can support future research is supported by the fact that current generic approaches and specialised approach (presented in 5.1) are still failing on the task of breaking defences (13% success rate). The claim that the benchmark is tractable is demonstrated in section 5.2. The paper proposes metrics for each of the four identified steps to solve the problem of the benchmark, and reports the quantitative results for each step. Methods And Evaluation Criteria: The evaluation criteria make sense and are inline with real-world expectations. My only concern regards the capacity of the fact that the evaluation metric (attack success rate) my not reflect the capacity of the LLM agent to solve the exact task but a similar task (e.g. developing a better attack) that has the same objective (augmenting the success rate of the attack). See weakness. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments are well-designed. Supplementary Material: I did not review the supplementary in details because the main paper provides enough evidence to suggest its acceptance. Relation To Broader Scientific Literature: The paper is well grounded in the two side of the literature it uses and contributes: image adversarial attacks and defenses and LLM benchmarks. Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: Strengths: - The paper is well written - The paper tackles the interesting problem of benchmarking LLM based agents on real-world multistep tasks. The method proposed in sound and could help the development of LLMs capable of reasoning on complex tasks. - I appreciate the real world task approach to develop benchmark. Weaknesses: - The evaluation metrics may be de-correlated from the defence mechanism. We have no guarantee on the method used by the attack to obtain a drop of the robust accuracy of the target model. How to guarantee this is the LLM by-passing the method in contrast to the LLM developing a stronger attack independently of the defence (i.e. regardless of the prompt). Other Comments Or Suggestions: The paper or the appendix could benefit from an end-to-end example of the defence at hand and the results of the LLM agent. Questions For Authors: No question Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are glad that the reviewer appreciates the real-world oriented approach of our benchmark. We address the questions as follows: **Evaluation metric**. Can the reviewer please clarify what they mean more precisely? Does the reviewer believe that the steps that we ask the models to do (i.e., implement a differentiable forward pass, then FGSM and then PGD) might be too restrictive, or that the model might find a way to hack the framework and increase the ASR without really breaking the defense? We believe that the guiding steps are broad enough for the model to implement any strategy that may be required to solve the problem. Additionally, we enforce an epsilon bound on the perturbations that the model can produce (see “Output” paragraph in section 4). This makes sure that the model cannot hack the evaluation by e.g. turning everything into random noise. **End-to-end example**. We have already created a webpage (which we will reference from the paper only in the camera-ready version for anonymization reasons) that contains all the traces coming from the execution of the agents on all the defenses. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your rebuttal and clarifying your plan to release a webpage with all execution traces. My point regarding the metrics: there are currently two ways the agent can increase ASR: 1. The agent "understands" the defense and breaks it with a simple specific “trick” (e.g. changing the loss/objective function). I expect this attack to significantly increase the ASR. 2. The agent develops a non-specific, stronger attack that only marginally improves the ASR. Do you consider the two approaches equivalent to evaluate the capacity of the model to solve the problem, or is the first option the more desirable? In the latter case, how can we check it specifically breaks the given defence?
null
null
null
null
null
null
null
null
Whitened CLIP as a Likelihood Surrogate of Images and Captions
Accept (poster)
Summary: ## update after rebuttal I will keep my score as is (3) This paper proposes Whitened CLIP (W-CLIP) that transforms the CLIP latent space, providing direct access through log-likelyhood function. W is computer only once, based on data, a-priories. Claims And Evidence: Yes. The whitening transform in section 3.2 and whitening CLIP embedding in section 3.3 contains convincing evidence. The authors did quantitative statistical experiments using Anderson-Darling and D’Agostino-Pearson tests, indicating the features in the whitened space can be well approximated by a normal distribution. Methods And Evaluation Criteria: Since this work is on CLIP embedding transformation - the paper would be much more stronger if the authors did evaluation on zero shot transfer, classification, OCR etc. downstream tasks similar to the original CLIP paper https://arxiv.org/pdf/2103.00020 Theoretical Claims: The output log-likelyhood is consistent with intuitive understanding - specific words have less log likelyhood vs common words, synthetic image with artifacts has less log likelyhood vs real image. Sound claims except it is not clear in line 897-900 why W matrix becomes unstable (and may not be invertible) if the features are highly correlated specially for text embedding Experimental Designs Or Analyses: The experiments are sound as in section 4. But I think this paper could have experiments to see how W-CLIP impacts the performance of downstream tasks such as image classification, OCR etc. Supplementary Material: Yes, all the sections A, B, C, D specifically A and D for reproducibility and W calculation Relation To Broader Scientific Literature: This work expands the original CLIP paper https://arxiv.org/pdf/2103.00020 . It transforms the CLIP latent space, ensuring each feature in the embedding space has unit mean, zero standard deviation and no correlation with other features. As a result, one can express this embedding as a direct log-likelyhood function which is a novel idea Essential References Not Discussed: I think "Hyperbolic Image-Text Representation" ( https://arxiv.org/abs/2304.09172) and "Embedding Geometries of Contrastive Language-Image Pre-Training" ( https://www.arxiv.org/abs/2409.13079) are two related work for this paper. They do not do what this paper suggests but seem to be a good reference for the related work since they show alternative representation of CLIP embedding. I do not see those two papers referenced anywhere unless I am missing something. Other Strengths And Weaknesses: Strength: 1. Transforms the CLIP latent space, ensuring each feature in the embedding space has unit mean, zero standard deviation and no correlation with other features 2. Formulating the CLIP method in log likelyhood 3. This log-likelyhood representation helps separate real image vs synthetic image with artifacts. The later having less log likelyhood which is intuitive 4. Very well written paper with clear instruction on reproduction. Weaknesses: 1. Lack of evaluation of W-CLIP on downstream tasks such as image classification, similarity search, OCR etc. Other Comments Or Suggestions: None Questions For Authors: 1. I think "Hyperbolic Image-Text Representation" ( https://arxiv.org/abs/2304.09172) and "Embedding Geometries of Contrastive Language-Image Pre-Training" ( https://www.arxiv.org/abs/2409.13079) are two related work for this paper. I do not see them referenced anywhere unless I am missing something. Wondering what do the authors think about these two work? What will happen if Whitened technique is applied to those two representation? 2. log-likelyhood representation helps separate real image vs synthetic image is interesting. Can it help separate synthetic image if there is no artifact? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and comments. Below, we address key concerns regarding the applicability of our approach to zero-shot settings, and address the reviewers questions. ### **Methods and Evaluation Criteria** To address the reviewer’s comment regarding zero-shot transfer to a downstream task, we conducted a large-scale experiment on the generated image detection task. The generated images were synthesized from text, taken from three benchmarks [1,2,3]. Images are generated using 20 generative models, totaling 100k generated images. Since these datasets include fewer real images, we supplemented real samples from the MSCOCO training set. Here, zero-shot refers to having no exposure to generated content and no task-specific training. We use only real images (from MSCOCO validation set) to compute the whitening matrix and do not fine-tune on this task. Thus, we compare against other zero-shot image detection baselines. We benchmark against four zero-shot detection methods: AEROBLADE [4] (CVPR 2024), RIGID [5] (arXiv 2024), ZED [6] (ECCV 2024), and Manifold-Bias [7] (ICLR 2025). We use official implementations for [4] and [7], and self implement [5] and [6] based on their papers and available code (of other models used in these methods). Each method outputs a continuous criterion score, which we binarize using a threshold from a small calibration set of 1k real images: **th = mean(C) + std(C)**, where C denotes criterion values. This calibration set is disjoint from the evaluation set. We use four metrics: AUC and AP (Average Precision) as separation metrics, and F1-score and Accuracy as classification metrics. Evaluation is based on 100k generated and 100k real images. As shown below, our method outperforms all baselines across all metrics. This setting follows the protocol used in [7], the most comprehensive of the compared works. Our results for baseline methods align with those reported in [7]. We believe this validates our method’s ability to distinguish real from generated images in a zero-shot setting. #### Performance Comparison | Method| AUC|AP|F1|Acc| |-|-|-|-|-| | AEROBLADE| 0.52| 0.48 | 0.64| 0.53| | RIGID| 0.51| 0.53| 0.28 | 0.52| | ZED| 0.69| 0.66| 0.69| 0.62| | Manifold-Bias| 0.85| 0.88| 0.76| 0.78| | Ours| **0.89** | **0.89** | **0.82** | **0.81** | While on some generative models methods [6] or [7] occasionally perform competitively, ours is more consistent across all generative models, whereas others experience significant drops on specific ones. Full per-model results are available here: [https://drive.google.com/file/d/1hQFwAqpo3opByTqC70mb4-fDz0z8TkVQ/view?usp=drive_link](https://drive.google.com/file/d/1hQFwAqpo3opByTqC70mb4-fDz0z8TkVQ/view?usp=drive_link) ### **Theoretical Claims** When CLIP embedding features are highly correlated, the covariance matrix can become ill-conditioned, yielding near-zero eigenvalues. During whitening, eigenvectors are scaled by the inverse square root of these values, so eigenvalues close to zero can result with a numerically unstable and non-invertible whitening matrix W. ### **Questions for Authors** 1. We thank the reviewer for highlighting two relevant papers, which we will cite in the final version. These works propose alternative, hierarchical embedding spaces to CLIP. The whitening transform is agnostic to the embedding space, as long as the data adequately represents it. In the context of these embedding spaces we assume this requires diverse samples, including many complex examples. For our likelihood approximation, a key factor is the distribution of each feature. As shown in Fig. 4.c, CLIP features exhibit a Gaussian-like distribution, which becomes standard normal after whitening, as validated in Tab. 1. If the original features of these embedders follow a different distribution, whitening may not yield standard normal features, undermining the assumptions of our likelihood model. 2. This question is addressed in the zero-shot detection experiment discussed above. We hope these additional results and clarifications further support our proposed approach. [1] Wang, Sheng-Yu, et al. "CNN-generated images are surprisingly easy to spot... for now." CVPR 2020.‏ [2] Ojha, Utkarsh, et al. "Towards universal fake image detectors that generalize across generative models." CVPR 2023.‏ [3] Zhu, Mingjian, et al. "Genimage: A million-scale benchmark for detecting AI-generated image." NeurIPS 2023 [4] Ricker, Jonas, et al. "AEROBLADE: Training-free detection of latent diffusion images using autoencoder reconstruction error." CVPR 2024 [5] He, Zhiyuan, et al. "RIGID: A training-free and model-agnostic framework for robust AI-generated image detection." arXiv 2024 [6] Cozzolino, Davide, et al. "Zero-shot detection of AI-generated images." ECCV 2024.‏ [7] Brokman, Jonathan, et al. "Manifold induced biases for zero-shot and few-shot detection of generated images." ICLR 2025
Summary: The authors propose the use of a whitening transform in the CLIP space, which offers an efficient solution in closed-form via the SVD. Using “WCLIP” they explore a wide number of practical downstream tasks one can tackle using the now-quantifiable likelihood (OOD image detection, caption complexity, quantifying artifacts, etc). ## update after rebuttal The authors provide useful clarifications during the rebuttal, and additional evidence of the usefulness of the proposed methodology. However, I maintain that work is needed on the experimental section to clarify the authors' key contributions--ultimately, I now lean towards a weak acceptance based on these considerations. Claims And Evidence: Yes — the authors do not make many claims in the paper, and the few small claims they do make are reasonable and sound. Methods And Evaluation Criteria: The method is well-formulated and makes sense theoretically. The absence of a need to train a mapping end-to-end (and tune the hyperparameters that come with such approaches) is a key strength of the authors' proposal to use the whitening transform. However, there is a lack of appropriate experimental results comparing the proposed method to alternative baselines for the tasks the authors explore (please see my comments in the “experimental design” section for more on this). Theoretical Claims: N/a — no theoretical claims are made in the paper. Experimental Designs Or Analyses: The authors explore multiple properties and downstream use cases of WCLIP. However, I am not convinced by the majority of the experiments. The authors did an insufficient job for motivating the practical benefits of the properties induced by the method, and the downstream tasks explored lack any comparisons to baselines to confirm that the proposed method confers any advantages over existing/simpler techniques. Concretely: **Text complexity**: the authors show that longer captions yield lower likelihood scores. However, I am confused about what insights this finding adds — CLIP is (presumably) trained with short image-caption pairs. Thus, I would argue the decreasing likelihood as a function of caption complexity likely predictably arises from the data itself, and likelihood here offers no unique insights. I fully expect to see the same relationship with CLIP’s original cosine similarity for short vs long/specific captions. What value does this analysis provide over the same analysis with the cosine similarity metric? The authors could perform the same experiments with the original CLIP cosine similarity to show why the likelihood is more informative, or why this is useful. **Uniformity:** the authors show in Figure 4: “the effectiveness of the whitening transform in achieving unit variance and zero correlation among features”. Isn’t this just by definition of using the whitening transform? There is no motivation for *why* this is a desirable property or the unique benefits of it. Furthermore, the authors next mention uniformity as a “desirable” property too, but do not elaborate at all on this point. **Data analysis**: In the authors’ experiments showing the use of likelihood to distinguish between real/fake/OOD data, they once again fail to benchmark against even a simple baseline to show why one would use this method over existing techniques—for example, what about simple k-means anomaly detection in the original CLIP space? I am not convinced there is any practical benefit of WCLIP over much simpler analyses currently. Supplementary Material: Yes, I went through the additional experiments included here. I did not pay much attention to the theoretical preliminaries describing the standard whitening transform, however. Relation To Broader Scientific Literature: I am not familiar with the wider literature for analysis of CLIP. However, even without specific knowledge of the literature, I am confident in my assessment of the paper as needing experiments comparing their method to simpler alternative baseline approaches to justify and motivate why one would use the whitening transform. Essential References Not Discussed: The paper appears to do quite a good job in the literature review. I don’t know the literature well enough to identify whether they have missed important works. Other Strengths And Weaknesses: I like the authors’ ideally conceptually — whilst i wouldn’t totally agree with the authors' characterization that it is “training-free” (perhaps one could say instead it offers a closed-form solution, based on the SVD), I do think the efficiency of the method is a key strength of the paper. However, as discussed above, the experiments in the paper need a lot of work. I would encourage the authors to prioritize depth over breadth here, and polish the motivation for the whitening analysis. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We value the reviewer’s critical insights and suggestions. Below, we provide detailed clarifications and supporting evidence addressing the concerns raised. ### **Experimental Designs or Analyses** #### **Text Complexity** One example in Fig. 5 does show higher likelihood for a shorter sentence (left), but this does not imply a bias toward shorter inputs. In Fig. 7c, our method's likelihood is shown to be unaffected by caption length, unlike the other LLMs and VLMs. Thus, our findings are contrary to the reviewer’s claim. In fact, in Fig. 7a and Tab. 4, we present a surprising finding: LLMs and VLMs show similar distributions for captions with and without nouns. As illustrated, the noun-free captions are semantically illogical. Since LLMs and VLMs are highly biased toward shorter inputs, the distribution remains unchanged. In contrast, our method—agnostic to caption length—assigns significantly lower likelihood values to the noun-free captions. Regarding cosine similarity as an alternative to likelihood: the two serve fundamentally different roles. Cosine similarity compares *two* inputs and outputs a scalar based on angular distance, whereas likelihood evaluates a *single* input. Therefore, cosine similarity is not suitable as a substitute for likelihood in this context. #### **Uniformity** - **Unit Variance and Zero Correlation**: As the reviewer notes, these properties are directly achieved through the whitening transform. While this is given in theory, this experiment empirically confirms that the transformation results in an isotropic space where features have zero mean, unit variance, and are uncorrelated. These characteristics, along with the normality verified in Sec. 3.3, are essential for computing likelihood using a multivariate Gaussian model (Sec. 3.4). - **Uniformity**: We agree that we did not elaborate on uniformity in the main paper, due to space constraints. Here we supply a more detailed explanation - uniformity in latent representations ensures that embeddings are evenly distributed across the latent space, preventing representation collapse where different inputs become indistinguishable due to overly similar embeddings [1]. A uniform space enhances discrimination between inputs, improving downstream tasks such as classification [2]. It also promotes generalization and avoids overfitting to specific latent regions [3]. In contrastive learning, uniformity complements alignment by pushing dissimilar inputs apart [1], resulting in a well-structured, robust, and generalizable representation space. We will add a more detailed explanation to the appendix and refer to it from the main paper. #### **Data Analysis** We refer the reviewer to our new experiment on zero-shot generated image detection (see response to Reviewer 531Y). Specifically in relation to cosine similarity, we note that both RIGID and Manifold-Bias use cosine similarity as part of their detection pipelines. However, our likelihood-based method is simpler, significantly faster (see table below), and outperforms both baselines. Moreover, other cosine-based methods [4,5] for this task are not applicable in a zero-shot setting. This experiment highlights both the effectiveness and practical advantages of our approach. To demonstrate the efficiency of our method, we report per-image inference times on a single A100 GPU for the zero-shot detection task: | **Method**| **Running time [sec per image]** | |------------------|------------------------------| | AEROBLADE| 4.66| | RIGID| 0.59| | ZED | 0.26| | Manifold-Bias| 1.66| | **Ours** | **0.05** | ### **Other Strengths and Weaknesses** #### **Not "Training-Free"** The reviewer raises a valid point: PCA in the whitening transform can be seen as unsupervised training. However, this is analogous to CLIP pretraining—performed once on unlabeled data and reused across downstream tasks. In our case, this step is extremely lightweight (under 1 second). As demonstrated above, leveraging the likelihood for a downstream task results with a very efficient method. We hope these clarifications and new experiment address the reviewer’s concerns and underscore the practical strengths of our approach. We would appreciate positively considering increasing the final rating of the paper. [1] Wang, Tongzhou, and Phillip Isola. "Understanding contrastive representation learning through alignment and uniformity on the hypersphere." ICML 2020 [2] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." ICML 2020 [3] Balestriero, Randall, and Yann LeCun. "Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods.” NeurIPS 2022 [4] Cozzolino, Davide, et al. "Raising the Bar of AI-generated Image Detection with CLIP." CVPR 2024 [5] Sha, Zeyang, et al. "De-fake: Detection and attribution of fake images generated by text-to-image generation models." ACM SIGSAC 2023 --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their thorough response! 1. **Text complexity & uniformity**: thanks to the authors for helping me interpret the results--my apologies to the authors for my misunderstanding here. This result is rather more interesting than I understood it to be! I do maintain my opinion that "depth over breadth" would be helpful in preventing confusion, however -- for example, to include the lengthier discussion here in this rebuttal in Sec 4.1., and to deprioritize some of the discussions about uniformity. 2. **Additional results**: the additional experiments on generated image detection in response to Reviewer 531Y are appreciated. Whilst this is not an experiment they ask for explicitly, I think the authors' response does highlight some additional practical value of the method, and outperforms some very recent baselines. Ultimately, I change now my rating to weakly support the paper (after reading the other reviewers' responses), whilst maintaining that the experimental section needs a re-work for the camera-ready version to hightlight the strengths of the method.
Summary: This paper studies the CLIP features to approximate the likelihood of images and captions. The presented method, Whitened CLIP (W-CLIP), uses an invertible linear operation to convert the CLIP features into a zero-mean, unit standard deviation space. This normalized embedding space can be used in many cases, including detecting artifacts in synthetic images, analyzing the domain drifts for different datasets, and enhancing image manipulation of two images. Claims And Evidence: This paper is clearly motivated and proposes a simple yet effective method with abundant experiments. Methods And Evaluation Criteria: Some key results lack quantitative metrics to validate the effectiveness of the proposed method. - While we have qualitative examples in Fig 2 and Fig 8 to show that W-CLIP can detect artifacts in synthetic images, we don't have a quantitative metric to know how good it is. Moreover, can W-CLIP embedding be used to tell the aesthetics or quality of images? - When using full circle SLERP to do image manipulation, we have qualitative examples in Fig 20 and Fig 21, but again, we don't have any large-scale evaluation to show the superiority of W-CLIP compared with CLIP. Theoretical Claims: This paper doesn't have any theoretical contributions. Experimental Designs Or Analyses: The experiment design is holistic including the image and text domain. However, this paper lacks some quantitative evaluation for some experiments. (See Methods And Evaluation Criteria part) Supplementary Material: This paper provided the codes and evaluation documentation, but I didn't run their codes. I also checked their appendix in the paper. Relation To Broader Scientific Literature: This paper claims they are the first to whiten the CLIP features for image and text likelihood analysis. I haven't tracked this domain closely, so I cannot verify their claim. I'll look at other reviewers' comment on this. Essential References Not Discussed: I haven't tracked this domain closely, so I cannot verify their claim. I'll look at other reviewers' comment on this. Other Strengths And Weaknesses: See other parts. Other Comments Or Suggestions: The figures in this paper are vague and have low resolution. For instance, subfigures in Figure 4 have small x-y axis labels and legends. Figure 3 is a good example, I highly recommend the authors redraw all the figures like Figure 3. Questions For Authors: I am open to discuss and would be happy to reaccess my rating during rebuttal. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's detailed and thoughtful feedback. Below, we address each point raised. ### **Methods and Evaluation Criteria** #### **First Point, first part** > *"While we have qualitative examples in Fig 2 and Fig 8 to show that W-CLIP can detect artifacts in synthetic images, we don't have a quantitative metric to know how good it is."* To address this, we conducted a large-scale experiment on zero-shot detection of generated images. Full details are provided in our response to Reviewer 531Y. A summary of the comparative results is given below: | **Method**| **AUC** | **AP** | **F1** | **Acc** | |--|-|-|-|-| | AEROBLADE| 0.52 | 0.48 | 0.64 | 0.53 | | RIGID | 0.51| 0.53 | 0.28 | 0.52| | ZED| 0.69| 0.66| 0.69 | 0.62 | | Manifold-Bias | 0.85| 0.88 | 0.76 | 0.78| | **Ours**| **0.89** | **0.89** | **0.82** | **0.81** | #### **First Point, second part** > *"Moreover, can W-CLIP embedding be used to tell the aesthetics or quality of images?"* We refer to Fig. 3 (top right), where an ImageNet-C experiment shows that norms of whitened embeddings respond to image corruptions (e.g., impulse noise). Additional results appear in Fig. 10 (Appendix B), covering 12 corruptions including blur, defocus, and low contrast. In all cases, corrupted images exhibit higher norms (in the whitened space) than the original images. As explained in Sec. 4.2 (lines 316–322), higher norms in the whitened space correspond to lower likelihood values, indicating reduced image quality. These results suggest that W-CLIP embeddings capture information related to image quality. #### **Second Point** > *"When using full circle SLERP to do image manipulation, we have qualitative examples in Fig 20 and Fig 21, but again, we don't have any large-scale evaluation to show the superiority of W-CLIP compared with CLIP."* To provide quantitative insight, we conducted a full-circle SLERP experiment on the MSCOCO validation set (5k images). For each image, we performed full-circle SLERP in both the CLIP and W-CLIP embedding spaces. In this process, a source image is interpolated toward a destination image along a circular path within the embedding space. Crucially, the image generated at the 180° position from the source—referred to as the **“opposite image”** (generated from the “opposite embedding”)—is invariant to the chosen destination and determined solely by the source. While other positions along the path are influenced by the destination embedding, the 180° embedding is a fixed, symmetric counterpart. We generate these opposite images using both CLIP and W-CLIP embeddings and observe a stark contrast: in the CLIP space, opposite images degrade into structured noise, whereas in the W-CLIP space, they remain visually natural and semantically meaningful, as shown in Figs. 20, 21. The structured noise produced by CLIP exhibits 4×4 pixel blocks and a restricted color palette (black ('0' in all color channels), white ('1' in all color channels), red, green, blue, magenta ('1' in red and blue channels), cyan ('1' in green and blue channels), and yellow ('1' in red and green channels)), suggesting synthetic artifacts. We provide visual examples (20 opposite images) in the following link: [https://drive.google.com/drive/folders/1Q85pz8y-36K2eHXDHsRcciblihvHgwgX?usp=drive_link](https://drive.google.com/drive/folders/1Q85pz8y-36K2eHXDHsRcciblihvHgwgX?usp=drive_link) To quantify these differences, we compute Total Variation (TV), Entropy, and the percentage of extreme saturation values (top or bottom 1% of the pixel range). All metrics are computed per channel and averaged per image across three sets: original MSCOCO images, CLIP opposites, and W-CLIP opposites. The results are summarized below: | **Method** | **TV** | **Entropy** |Saturation Values [%] | |-|-|-|-| |MSCOCO| 222.3| 7.3|4.2| |CLIP Opposite| 156.7| 4.8|55.5| |W-CLIP Opposite| 215.9| 7.2|6.4| These findings confirm that W-CLIP opposites are statistically similar to natural images, whereas CLIP opposites exhibit significantly reduced entropy and variation and much higher percentage of saturation values, indicating a lack of natural structure. ### **Other Comments or Suggestions** > *"The figures in this paper are vague and have low resolution. For instance, subfigures in Figure 4 have small x-y axis labels and legends. Figure 3 is a good example, I highly recommend the authors redraw all the figures like Figure 3."* We thank the reviewer for this helpful suggestion. We will improve the visual quality of all figures in the final version. Specifically, we will enlarge the legends and the axes ticks in Figs. 4, 6, and 7, and reformat histograms for clearer presentation, following the style of Fig. 3. We hope our additional experiments and clarifications address the concerns raised and further strengthen the validity and impact of our proposed approach. We would appreciate positively considering increasing the final rating of the paper. --- Rebuttal Comment 1.1: Comment: Hi, Thank the authors for their detailed rebuttal. It has resolved my major concerns on the quantitative metrics of the proposed W-CLIP. I raised my rating to weak accept.
null
null
null
null
null
null
null
null
Identifying Metric Structures of Deep Latent Variable Models
Accept (poster)
Summary: This paper addresses the problem of learning identifiable representations from a novel perspective, focusing on the distances between representations rather than their coordinates. The authors begin by discussing the challenge of identifiability in latent variable models, emphasizing that maximum likelihood estimation (MLE) alone does not guarantee identifiability. They highlight that training the same model twice can result in different learned representations. To address this, the paper introduces a new notion of identifiability: given two models, A and B, the geodesic distance between latent variables $z_1$ and $z_2$ is considered identifiable if it remains consistent across both models. To validate this idea, the authors train multiple VAEs on different datasets and compare Euclidean and geodesic distances between 100 randomly selected test sample pairs. Their results show that geodesic distances exhibit lower variance, supporting their proposed notion of identifiability. Claims And Evidence: See Weaknesses Methods And Evaluation Criteria: See Weaknesses Theoretical Claims: The theoretical framework, to the best of my ability, checks out. Experimental Designs Or Analyses: See Weaknesses Supplementary Material: Yes. Theorem B4 and Computing the geodesics Relation To Broader Scientific Literature: - I think the paper misses an important section of literature: Disentangled representations + lie groups, namely [5] and all the papers that were built on top of that (e.g. [6-8]) as well as many other equivariant neural networks. Currently, the paper reads in a way that is the first paper that thought about representations in terms of transformations. This claim needs to be toned down quite a bit I would say. As these papers show the group element $g$ $g. x_1 = x_2$ indeed corresponds to the geodesics. - Furthermore, while I understand that the authors do not make a big claim on proposing a new way to compute geodesics. However previous works on computing and analyzing geodesics must be cited: (3.g. [1-4]) Essential References Not Discussed: See Relation To Broader Scientific Literature* [1] Chadebec, Clément, and Stéphanie Allassonnière. "A geometric perspective on variational autoencoders." Advances in Neural Information Processing Systems 35 (2022): 19618-19630. [2] Chen, Nutan, et al. "Fast approximate geodesics for deep generative models." Artificial Neural Networks and Machine Learning–ICANN 2019: Deep Learning: 28th International Conference on Artificial Neural Networks, Munich, Germany, September 17–19, 2019, Proceedings, Part II 28. Springer International Publishing, 2019. [3] Chen, Nutan, et al. "Metrics for deep generative models." International Conference on Artificial Intelligence and Statistics. PMLR, 2018. [4] Arvanitidis, Georgios, Lars Kai Hansen, and Søren Hauberg. "Latent space oddity: on the curvature of deep generative models." arXiv preprint arXiv:1710.11379 (2017). [5] Higgins, Irina, et al. "Towards a definition of disentangled representations." arXiv preprint arXiv:1812.02230 (2018). [6] Zhu, Xinqi, Chang Xu, and Dacheng Tao. "Commutative lie group vae for disentanglement learning." International Conference on Machine Learning. PMLR, 2021. [7] Wang, Tan, et al. "Self-supervised learning disentangled group representation as feature." Advances in Neural Information Processing Systems 34 (2021): 18225-18240. [8] Yang, Tao, et al. "Towards building a group-based unsupervised representation disentanglement framework." arXiv preprint arXiv:2102.10303 (2021). Other Strengths And Weaknesses: **Strengths**: - The paper reads really well. And it offers a good introduction to the identifiability problem - The key contributions of the paper namely Theorem 4.7 is a good theoretical contribution and worth highlighting. - Regardless of the results: the experiment design is very systematic and justified. **Weaknesses** - The theoretical framework, to the best of my ability, checks out. However, I find that the experiments do not strongly validate the hypothesis that geodesic distances are truly identifiable. The main result, particularly Figure 7, is somewhat underwhelming, as there is considerable overlap between the two histograms. While this does suggest that geodesic distances are more identifiable than Euclidean ones, I would argue that this falls short of demonstrating that they are identifiable in a definitive sense. Ofcourse, there are some notable challenges in measuring geodesic distances accurately. Specifically: (1) 1. The optimization procedure used to compute geodesic distances is not optimal, as the authors themselves acknowledge. Euclidean distance in the data space is not an ideal metric for these datasets. A useful addition to the paper would be a toy experiment where the data manifold is known, allowing for precise measurement of the true geodesic distances. - As pointed out in the prior work section, while this is the 1st work to the best of my knowledge that points out the connection between identifiability and transformation, there is a strong link between this work and the literature on lie group latent spaces [5-8]. The authors need to discuss some of these works and tone down the claim that this is the first work that focused on transformations. - While the authors point this out in the first paragraph of Section 7, it remains a strong problem. The claim "We argue that most data is equipped with units of measurement, which greatly simplifies the task of picking a suitable metric in the observation space." does not hold in most vision and nlp tasks for example. Moreover, if we're using Euclidian distance in x-space, we implicitly assume that the dataset is dense, which is not a very reliable assumption. Overall, I think the connection between identifiability and distances is a perfectly valid contribution worthy of publishing and of interest to the representation learning community. The main weaknesses of the paper in my opinion, are: (1) Not discussing the connection between this work and all the disentanglement work that was built on [4] using groups ("transformations") (2) The results sadly are not very promising. Other Comments Or Suggestions: - I want to applaud the authors for writing Section 7. Questions For Authors: - In Figure 1: I'm wondering if you can show the the matrices using the geodesics distances as well instead of Euclidean? How would the matrices for different runs look like? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for thoughtful feedback, as well as their support for acceptance. >However, I find that the experiments do not strongly validate the hypothesis that geodesic distances are truly identifiable. The main result, particularly Figure 7 We emphasize that the main result of the paper is Theorem 4.5. It shows that under any (injective) generative model, the indeterminacy transformations of the true latent space will automatically respect the true geometry of the data. I.e. it proves that any geometrical information extracted from that model is identifiable. We achieve this without placing (notable) restrictions on the model, architecture, or training methods. We motivate our paper and focus our experiments within the point of view of geodesic distances which is an example of geometrical information used in practice. This leads to Theorem 4.7 proving that these are identifiable. >there is considerable overlap between the two histograms. [..] I would argue that this falls short of demonstrating that they are identifiable in a definitive sense We want to clarify that identifiability is a theoretical question and that our present theorems are the definite evidence. The experiments demonstrate that the asymptotic property in Theorem 4.7 is practically achievable in standard models using off-the-shelf methods on finite data. However, we understand the concern about overlapping histograms, but emphasize that such is to be expected and is fully in line with the theory: - Proposition 4.8 shows that a (scaled) Euclidean distance can be identifiable if the model behaves in a Euclidean way in a region. - Riemannian geometry is locally Euclidean, implying that local Euclidean distances can be expected to be robust when points are close. Consequently, neighboring points can have robust Euclidean distances (low CoV), implying overlapping histograms. Finite data further implies uncertainty compared to theoretical treatment and makes the estimation of the manifold stochastic. As the reviewer points out, optimization of geodesics is noisy and may lead to a distorted picture (no efforts were made to counteract this on specific datasets or models). While the focus of our work is theoretical, we acknowledge the reviewers’ requests for additional experiments, and have added results for FMNIST and CIFAR10. See [link](https://tinyurl.com/yxzwks2j) for plots and tables. FMNIST results are similar to previous results, while CIFAR10 shows the clear separation of histograms that you have requested. >there is a strong link between this work and the literature on lie group latent spaces [5-8] We do not claim to be the first to explore transformations of latent space; however, we are the first to apply them in the context of identifiability using Riemannian geometry. While previously used for different purposes, latent space transformations have proven valuable in various areas, including disentangled and equivariant learning, highlighting their mathematical and conceptual connections. The mentioned literature assumes a disentangled latent space where transformations decompose into individual factors of variation and the goal is to find representations that respect this structure (roughly speaking, equivariance of $A_{a,b}$ (Def 4.2)). Instead of enforcing specific properties, our theory analyzes the natural properties of $A_{a,b}$, and we find that just by learning a generative model, $A_{a,b}$ will automatically respect the latent Riemannian geometry (Theorem 4.5). Thus, our theory is valid regardless of whether a disentangled latent space exists in the sense of [5]. >previous works on computing and analyzing geodesics must be cited We already cite [4] and will include [1-3] as appropriate. >The claim "[...] most data is equipped with units of measurement[...]." does not hold in most vision and nlp tasks We acknowledge that there are cases where picking a suitable metric in data space is not trivial, but emphasize that this metric should only be meaningful infinitesimally. E.g. Euclidean distances are generally unsuited for images, but infinitesimally they are perfectly reasonable. Our theory also applies when pulling back ‘perceptual distances’, e.g. using features from pre-trained neural networks, see e.g. [this paper](https://tinyurl.com/3jvx27r3). >using Euclidian distance in x-space, we implicitly assume that the dataset is dense We disagree with this statement as we are not measuring 'isomap-style' geodesics. We measure geodesics along the manifold spanned by the model which does not require dense data to be identifiable. >The main weaknesses of the paper are: (1) Not discussing the connection between this work and all the disentanglement work (2) The results sadly are not very promising. To conclude: 1. see the discussion above, 2. our main results are theoretical, and the experiments are fully consistent with the theory. Our released code provides a path for turning theory into practice. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal. I want to thank the authors for their response. Just to double-check, in Figure 7, if we had measured the geodesic distance exactly, the histogram should fully peak at 0 correct? (in the mnist case at least). I guess still not, given that the 30 models trained with different seeds don't all have exactly the same likelihood? Is there anything else in the theory that breaks here? > We do not claim to be the first to explore transformations of latent space; however, we are the first to apply them in the context of identifiability using Riemannian geometry. While previously used for different purposes, latent space transformations have proven valuable in various areas, including disentangled and equivariant learning, highlighting their mathematical and conceptual connections. The mentioned literature assumes a disentangled latent space where transformations decompose into individual factors of variation and the goal is to find representations that respect this structure This makes sense. I would add a version of this in the final version. Given that I think I underestimated the importance of 4.5, I will increase my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the increased score and follow-up clarifications. >Just to double-check, in Figure 7, if we had measured the geodesic distance exactly, the histogram should fully peak at 0 correct? (in the mnist case at least). I guess still not, given that the 30 models trained with different seeds don't all have exactly the same likelihood? Is there anything else in the theory that breaks here? We share your intuition. Having exact geodesic distances would result in the histogram shifting closer to 0. However, there is still the noise associated with finite data which makes the manifold stochastic and hence we cannot expect no variability at all. > This makes sense. I would add a version of this in the final version. We will update the final version putting more emphasis on how our contributions relate to the literature in the field.
Summary: In this paper, the authors address the challenge of statistical identifiability in deep latent variable models, which are used to extract condensed representations of data. Traditional methods attempt to improve identifiability by imposing constraints like labeled data or limiting model expressivity. Instead, the authors shift the focus from identifying individual latent variables to identifying meaningful relationships between them—such as distances, angles, and volumes. The authors prove that these geometric relationships can be statistically identified under minimal assumptions, without additional labeled data. This result is significant for fields like scientific discovery, where reliable data interpretation is crucial. In the experiments, the authors test their assumption over 2 different datasets, MNIST and Celeb-A. They performed Student's T-test to show that the geodesic distances have much less variance than Euclidean distance. Claims And Evidence: I believe all the claims are well supported. Methods And Evaluation Criteria: I believe there can be more benchmarks included in this paper. Other commonly used image datasets, such as Fashion-MNIST, SVHN, Cifar-10, should also be computationally cheap to run. Theoretical Claims: I believe the theoretical claims are sound. Experimental Designs Or Analyses: I find the claim in the result analysis confusing. For example, 1. Why do the authors only use 3 classes of MNIST? Is this cherry-picked? 2. I don't see why we should expect that the digit class 0, 5, and 7 are naturally close to each other. I hope the authors can explain it in more details. Supplementary Material: I reviewed all sections in supplementary material. Relation To Broader Scientific Literature: A key contribution of the paper is linking identifiability to Riemannian geometry, establishing a novel theoretical framework. This connection allows practitioners to leverage established Riemannian tools (e.g., Riemannian averages, covariances, and principal components) to analyze latent structures in a statistically sound manner. Essential References Not Discussed: I think the literature is reviewed well. Other Strengths And Weaknesses: The goal of this work is well motivated and the paper is well strucutured in general. Other Comments Or Suggestions: I don't have any other comments. Questions For Authors: I have listed my questions in previous sections. Ethical Review Concerns: There is no ethical concern. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's valuable feedback and support for acceptance. >I believe there can be more benchmarks included in this paper. Other commonly used image datasets, such as Fashion-MNIST, SVHN, Cifar-10, should also be computationally cheap to run. While the focus of our work is theoretical (see reply to reviewer ev8z for more details), we acknowledge the reviewers’ requests for additional experiments. In particular, addressing this review we run experiments of FMNIST and CIFAR10. Following the approach in the main paper, we split them into an experiment satisfying the injectivity constraint (CIFAR10) and an experiment that does not satisfy the constraint (FMNIST). The histograms and an updated table are [at this link](https://tinyurl.com/yxzwks2j) and we provide the table here as well. | | MNIST | CELEBA | FMNIST | CIFAR10 | |-------------- |--------|--------|--------|--------| | **t-statistic** | -8.64 | -22.33 | -16.75 | -42.83 | | **p-value** | 1.00 | 1.00 | 1.00 | 1.00 | *Table: One-sided Student's t-test for the variability of geodesic versus Euclidean distances.* The findings are similar to those reported in the submitted paper, which supports the presented theory. We will further extend the paper with an appendix including extra results and details on implementations and model choices. >1. Why do the authors only use 3 classes of MNIST? Is this cherry-picked? As mentioned in the submitted paper, the choice of 3 classes for MNIST was to simplify plotting. We point to the extra experiments above and code in the submitted supplementary material (CelebA) to document absence of any cherry-picking. >2. I don't see why we should expect that the digit class 0, 5, and 7 are naturally close to each other. I hope the authors can explain it in more details. We do not expect any classes to be naturally close to each other. The digits 0,5 and 7 were picked randomly and their placement in Fig. 5 is merely a consequence of optimization of the model fitting. We hope to have addressed your concerns and sincerely thank you again for your review.
Summary: This paper studies the geometry of latent spaces of latent variable models like VAEs, normalizing flows, diffusion models etc. Primarily, the authors highlight that many seemingly simple factors like the latent coordinates or their pairwise euclidean distances etc. of latent variable models are provably not identifiable. This means that under a probablistic setting: despite having different parameterizations- they do not lead to the same distributions. The motivation of the generative model is to discover meaningful intrinsic properties of data that should not be dependent on randomness that is inherent due to training, noise etc. The authors explore this in the context of differential geometry of the latent space of generative models. The main presmise is that unlike the latent coordinates or even euclidean distances between them - geodesic distances computed using a pullback metric from the observation space satisfies identifiability. This paper both rigorously proves this and emperically demonstrates the hypothesis with experiments from MNIST and and CELEBA. Claims And Evidence: Yes. The central point is proved and also demonstrated emperially. However - I do find the experiments lacking generality in the class of generative models investigated. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes Experimental Designs Or Analyses: There is scope to be much more comprehensive in the experiments - for e.g. I would be very glad to see a table similar to Figure 7 for the transcriptomic data example from Figure 1 and that would solidify the main message of the paper across different models and types of data. Supplementary Material: Not thoroughly Relation To Broader Scientific Literature: To the best of my knowledge - this paper discusses an important issue that even though is not unique in all literature - provides a novel comprehensive analysis: theoretically and empirically regarding parameterization invariance of generative models. Essential References Not Discussed: Its fine Other Strengths And Weaknesses: Overall - I think this is a nice paper with a comprehensive conceptual and theoretical treatise on developing parameterization invariant representations. However, the experiments do lack generality and some more convincing demonstrations of the core message would go a long way. Therefore, I am very much on the border leaning slightly positively because of a well-compiled submission and an interesting read. Other Comments Or Suggestions: I am tempted to make the conclusion that generative models when trained well, tend to produce representations that preserve "intrinsic" distances on the data-manifold. For example - If I leave out the decoder 'f' completely and simply use my training dataset with a k-nearest neighbor graph and then compute a Dijkstra-like shortest path distance on this graph - I suspect it would correlate quite well with the construction of the pullback metric and computation of the geodesic distance from (35) which very much depends on the chosen model 'f'. It would be nice to have some experiments where the variability in the geodesic is also visualized somehow (like Fig 5 but with a band instead of just one curve) - especially in comparison to this model-independent geodesic distance. Questions For Authors: - Is Table 1 reporting the values for the geodesic or Euclidean distance? I am unable to parse the message here - Is the familiarity in the trajectories reported in Figure 6 always the case? I was expecting that the trajectory of the Euclidean geodesic to give images with comparatively more "abrupt" changes in each step in comparison to the geodesic which should give a more seamless transformation. It is somewhat visible already - although not strikingly. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful to the reviewer for valuable feedback and favoring acceptance. >There is scope to be much more comprehensive in the experiments - for e.g. I would be very glad to see a table similar to Figure 7 for the transcriptomic data example from Figure 1 and that would solidify the main message of the paper across different models and types of data. While the focus of our work is theoretical (see reply to reviewer ev8z for more details), we acknowledge the reviewers’ requests for additional experiments and will extend the paper with an Appendix including extra results and details on implementations and model choices. In particular, addressing this point we run experiments of FMNIST and CIFAR10. Following the approach in the main paper, we split them into an experiment satisfying the injectivity constraint (CIFAR10) and an experiment that does not satisfy the constraint (FMNIST). The histograms and an updated table are available [at this link](https://tinyurl.com/yxzwks2j). The findings are similar to those reported in the submitted paper, which supports the presented theory. Furthermore, we share the sentiment that the transcriptomic data example is underexplored and plan to add a distance matrix for the geodesic distances to Figure 1. >I am tempted to make the conclusion that generative models when trained well, tend to produce representations that preserve "intrinsic" distances on the data-manifold. Indeed, the main result of the paper, Theorem 4.5, shows that under any well-trained (injective) generative model, the indeterminacy transformations of the true latent space will automatically respect the true geometry of the data. I.e. it proves that any geometrical information extracted from the model is identifiable. >For example - If I leave out the decoder 'f' completely and simply use my training dataset with a k-nearest neighbor graph and then compute a Dijkstra-like shortest path distance on this graph - I suspect it would correlate quite well with the construction of the pullback metric and computation of the geodesic distance from (35) which very much depends on the chosen model 'f'. We appreciate and share your intuition. [This paper](https://arxiv.org/abs/1812.08284) considers geodesics under such an approach. However, we should emphasize that this is heuristic and is not strictly tied to our theoretical results on identifiability. Practically, we expect the approach to work well, though. >It would be nice to have some experiments where the variability in the geodesic is also visualized somehow (like Fig 5 but with a band instead of just one curve) - especially in comparison to this model-independent geodesic distance We agree that different optimisations can lead to different geodesics between the same two points, and acknowledging that a geodesic in itself is not unique. However, in this paper we address the variability of the distance measure across retraining of the models themselves. In our experiments, this means that we compute the geodesic distance between the same two points across 30 different models with 30 different latent spaces. Fig.5 shows just one geodesic in one latent space. Therefore plotting the band would be concerned with a different kind of variability. > Is Table 1 reporting the values for the geodesic or Euclidean distance? I am unable to parse the message here Building on the above, for each pair of points we have 30 measurements of distances according to both Euclidean and geodesic measures. To demonstrate that the geodesic distance is more stable, we compute the coefficient of variation (CV) of both distance measures for each point pair. These are then plotted in Fig. 7 and Table 1 reports the Student’s t-test for the CVs with one sided null hypothesis that geodesics are more stable, i.e. has a smaller CV. The message of Table 1 is that geodesic distances exhibit significantly less variation than Euclidean distances. We acknowledge that the table caption should be improved. We will make these details more explicit and will further add an appendix detailing the experiments. > Is the familiarity in the trajectories reported in Figure 6 always the case? I was expecting that the trajectory of the Euclidean geodesic to give images with comparatively more "abrupt" changes in each step in comparison to the geodesic which should give a more seamless transformation. It is somewhat visible already - although not strikingly. Your expectation is correct. We often see that geodesics provide more ‘smooth’ interpolations while Euclidean interpolations are more ‘abrupt’. This happens because geodesics move with constant speed in data space. This trend is, however, more evident, e.g., in MNIST than in CelebA. We’re happy to include such MNIST examples in the appendix if they are deemed interesting. We hope that our clarifications and additional experiments strengthen your view of the paper, and we thank the reviewer for the thoughtful feedback.
null
null
null
null
null
null
null
null
Physics-Informed Generative Modeling of Wireless Channels
Accept (poster)
Summary: This paper introduces a physics-informed generative model for wireless channel modeling, addressing challenges in data efficiency, generalizability, and physical interpretability. Unlike traditional stochastic models or recent GAN-based methods, the authors propose a Sparse Bayesian Generative Modeling (SBGM) framework that leverages the compressibility of wireless channels to learn distributions of physical channel parameters (e.g., delays, angles of arrival/departure). The results demonstrate that the proposed approach produces physically interpretable channel models that maintain statistical and structural consistency with real-world wireless propagation environments. Claims And Evidence: Good to me. Methods And Evaluation Criteria: The authors evaluate their methods on three different dataset: a modified standardized 3GPP spatial channel model for SIMO for a better illustration; QuaDRiGa for simulations with OFDM; and a further evaluation on the ray tracing database DeepMIMO in Appendix L. It justify the physical parameter distribution accuracy by the comparison of generated vs. real distributions for angular spread, power angular profile, and delay-Doppler characteristics. The channel generation performance is evaluated using cross-validation autoencoder reconstruction error (normalized MSE and cosine similarity). It demonstrates it outperformance by the comparisons with GAN-based baselines (AmbientGAN, GAN-rbf). Overall, these metrics align well with the goals of wireless generative modeling and provide strong empirical validation. Theoretical Claims: Good to me. Experimental Designs Or Analyses: The experimental setup is comprehensive, covering a range of scenarios and datasets to validate the proposed model. The authors conduct experiments on standardized 3GPP spatial channel models, QuaDRiGa simulations, and DeepMIMO ray-tracing datasets, ensuring that their results are applicable to both empirical and simulated wireless environments. The evaluation spans different system configurations, including single-input-multiple-output (SIMO) and orthogonal frequency-division multiplexing (OFDM), demonstrating the flexibility of the approach. A notable aspect of the experimental design is the inclusion of cross-validation using an autoencoder-based reconstruction method, which provides a robust metric for assessing the realism of generated channels. The authors also perform sensitivity analyses, varying the number of training samples and pilot symbols to examine the model’s robustness. One limitation of the experimental design is the lack of ablation studies to isolate the contributions of different components, such as the impact of sparsity constraints or the Kronecker factorization. Additionally, a discussion on training stability and sensitivity to hyperparameters would further improve the evaluation. Supplementary Material: The supplementary material provides extensive details that enhance the reproducibility of the work. It includes mathematical derivations for the Kronecker factorization, details on expectation-maximization updates, and pseudocode for channel realization generation. Relation To Broader Scientific Literature: The paper bridges physics-based wireless models and ML-driven generative methods, ensuring interpretability and generalizability. Unlike GANs, it leverages sparse Bayesian learning and structured priors, aligning with trends in physics-informed ML. Essential References Not Discussed: None Other Strengths And Weaknesses: One of the major strengths of this paper is its ability to generate physically consistent channel models while maintaining computational efficiency. The approach is novel in that it shifts from implicit GAN-based modeling to a prescribed statistical framework that ensures generalizability across system configurations. The integration of structured priors further enhances sample efficiency and interpretability. However, the paper lacks ablation studies that isolate the effects of different components, such as the impact of sparsity constraints, Kronecker factorization, and dictionary-based representations. Additionally, while the method generalizes across system configurations, there is limited discussion on generalization to different propagation environments or frequency bands. Other Comments Or Suggestions: Good to me. Questions For Authors: 1. How well does the model generalize to unseen frequency bands or new environments? 2. How sensitive is SBGM to different levels of noise or missing data? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer BGie for the reviewer's comments. In the following, we address the reviewer's questions and concerns. **To Other Strengths And Weaknesses (2. paragraph):** We agree with the reviewer that further ablations studies regarding various hyperparameters would improve our work's quality. Therefore, we conducted preliminary experiments regarding the number K of CP-GMM components as well as our models' grid resolution. More specifically, in Fig. 2 in the anonymous repository https://github.com/anonymousicml2529/rebuttal, we plot the angular spread for different numbers K of CP-GMM components for the modified 3GPP dataset. For K = 1, the CP-GMM coincides with M-SBL [1]. It can be seen that for K<4, the angular spread is significantly overestimated, which is to be expected as the underlying data is coming from a scenario with power from four different angular regions (cf. Fig. 3 in Appendix H). In Fig. 3 (https://github.com/anonymousicml2529/rebuttal), we investigate different grid resolutions for the QuaDRiGa 5G OFDM dataset. In these simulations, we vary the resolution of the delay-doppler grid with $S^2$ gridpoints from $S=16$ to $S=64$, while keeping the maximally resolvable delay at $8\mu\text{s}$ and the maximally resolvable doppler at $100\text{Hz}$. It can be seen that the grid resolution has only marginal effects on the performance of the autoencoder in a) and b). In c), we plot exemplary samples from the trained CP-GMMs, respectively. **To Questions for Authors (1):** There are various effects influencing the channel when considering a different frequency band. Interestingly, for spatial channels, we can adapt to different frequency bands by considering a new dictionary $\mathbf{D}$, that takes the newly considered frequency band into account. More precisely, the so-called beam squint (cf. [2]) is the major cause for changed spatial channel characteristics when varying frequency bands. The beam squint describes that the spatial steering vectors are frequency-dependent as they depend on the ratio between the antenna spacing and the considered wavelength. However, as the steering vectors constitute the dictionary $\mathbf{D}$ in our model, we can adapt this ratio in a newly used dictionary after training. The remaining frequency-dependent effects regarding, e.g., the path loss are rather slowly changing, which is why we expect our model to be valid for a reasonably broad frequency range. We will consider including experiments regarding this in our work. We would like to mention that our model is supposed to capture the scenario-specific channel characteristics from one concrete environment. Only then, it can generate new samples that are better suited for this environment in comparison to generic standardized channel models (e.g., 3GPP). A generalization for a large variety of environments is what the already existing standardized channel models are designed for. This is why we do **not** want our model to generalize very well to unseen environments, as this would likely lead to a considerable drop in performance for the environment it has been trained for. Of course, there is a certain trade-off that can be optimized. However, we do not think that this is within the scope of our work, and we hope that the reviewer agrees with us on this point. **To Questions for Authors (2):** In our simulations, we added noise to our training samples whose noise variance is drawn such that the corresponding SNR is uniformly distributed between 0 and 20 dB. Moreover, SBGM is specifically designed to deal with noisy and corrupted data and considers the noise level during training. Therefore, SBGM is robust with respect to different noise levels. In our OFDM simulations, we consider solely training samples that contain missing data, and our model performs very well. As a consequence, SBGM also works well for missing data. We hope that this is what the reviewer is referring to and we did not misunderstand the reviewer's question. If not, we would appreciate it if the reviewer could comment again on this point. We hope that we addressed all the reviewer's concerns and would like to thank the reviewer again, as we think that the additional experiments regarding the hyperparameters will improve our work's quality. [1] Wipf et al., ”An Empirical Bayesian Strategy for Solving the Simultaneous Sparse Approximation Problem,” IEEE TSP., vol. 55 [2] B. Wang, M. Jian, F. Gao, G. Y. Li and H. Lin, "Beam Squint and Channel Estimation for Wideband mmWave Massive MIMO-OFDM Systems," in IEEE Transactions on Signal Processing, vol. 67
Summary: The paper presents a new way to model wireless channels by combining basic physical principles with modern generative modeling techniques. Instead of using complex black-box methods like GANs—which require lots of clean data—the authors introduce more structured approaches based on simplified versions of VAEs and GMMs (called CP‑VAE and CP‑GMM). These models learn the key physical characteristics of wireless channels (such as the angles, delays, and Doppler shifts) even when the data is noisy or incomplete. By incorporating known physics rules into the learning process, the method produces realistic and interpretable channel models. Additionally, the approach uses clever algorithmic tricks (like a closed-form update using Kronecker factorization) to reduce computational effort and can adapt to various wireless setups without needing to start over with training. Tests on several datasets show that this new method generates accurate channel representations and outperforms traditional GAN-based models. ## Update After Rebuttal: The authors sufficiently address my review and I vote to keep the score. Claims And Evidence: Most of the paper’s claims are supported by a mix of theoretical derivations and experimental results on simulated datasets. Methods And Evaluation Criteria: ## Methods - **CP-VAE and CP-GMM Models:** - Special versions of common generative models (variational autoencoders and Gaussian mixture models). - Leverage physical properties of wireless channels (like compressibility and regular covariance patterns) to learn key parameters (angles, delays, Doppler shifts) even from noisy or incomplete data. - Designed for common wireless setups such as SIMO and OFDM. - Can adapt to different system configurations without needing to retrain. ## Evaluation and Datasets - **Datasets Used:** - A modified 3GPP model for SIMO channels. - QuaDRiGa for OFDM channels. - The DeepMIMO ray tracing database. - **Evaluation Metrics:** - **Power Angular Profile** - **Angular Spread** - **Normalized Mean Squared Error (NMSE)** - **Cosine Similarity** Overall, these methods and evaluation criteria are well-suited for capturing and validating the real-world behavior of wireless channels. Theoretical Claims: There are no theoretical proofs in the first 9 pages, and I did not read them. Experimental Designs Or Analyses: The paper runs experiments on simulated data. While simulated datasets are standard, the experimental validation does not include real-world measurement data. Testing on real-world channels would further confirm the practical viability of the approach. Details on hyperparameter selection and sensitivity analysis are limited. More thorough reporting on these aspects could help validate the robustness of the methods. Supplementary Material: I was unable to review the supplementary material in detail. Relation To Broader Scientific Literature: The paper builds on ideas from sparse Bayesian learning and compressive sensing (as developed by Tipping (2001) and Wipf & Rao (2004)), adapting them to the wireless channel context. It leverages SBGM, previously explored in works like Böck et al. (2024b), to derive models (CP‑VAE and CP‑GMM) that are both data-efficient and physically interpretable. The use of standard datasets like QuaDRiGa and DeepMIMO situates the work within the broader literature, allowing direct comparison with existing channel models. Essential References Not Discussed: The related work in this paper appears to be appropriate for the paper. Other Strengths And Weaknesses: ## Strengths - **Originality:** - Integrates physical channel properties with sparse Bayesian generative modeling. - Introduces novel prescribed models (CP-VAE and CP-GMM) that reduce the need for high-quality data. - **Significance:** - Addresses key challenges in wireless channel modeling, enhancing interpretability and adaptability. - Demonstrates potential for use across various system configurations without retraining. - **Clarity:** - Overall presentation is clear with well-defined equations and detailed explanations. ## Weaknesses - **Experimental Validation:** - Evaluation is primarily based on simulated datasets; real-world datasets might further strengthen the claims. Other Comments Or Suggestions: None Questions For Authors: 1. **Real-World Data Validation:** Could you clarify whether you have tested your method on real-world channel measurements, and if not, what are your expectations or plans for such validation? *If the authors provide evidence or detailed plans that indicate strong performance on real-world data, it would significantly reinforce the practical impact of the work.* 2. **Hyperparameter Sensitivity and Robustness:** How sensitive is your approach to the choice of hyperparameters (e.g., grid resolution, noise variance estimates) and what guidelines can you offer for setting them in different scenarios? *A detailed sensitivity analysis or robust guidelines could increase confidence in the method’s reproducibility and general applicability.* Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We want to thank reviewer dNNa for the thorough review and the reviewer's assessment. In the following, we address the reviewer's additional questions. **To Experimental Designs Or Analyses (1. paragraph) & Weaknesses & Questions For Authors (1):** We agree with the reviewer that experimental results on real measurement data would strengthen our paper's contribution. We indeed have preliminary results of our models on a real-world measurement campaign that was conducted on the Nokia campus in Stuttgart, Germany, in 2017. In Fig. 4 in the anonymous repository https://github.com/anonymousicml2529/rebuttal, you can see the scenario in which measurements have been conducted. Moreover, the green regions indicate that users experienced a LoS path to the base station, while the red regions have solely NLoS paths. The base station was equipped with a 16x4 antenna array. In our preliminary results, we only consider the first row of the antenna array. In Fig. 5 (https://github.com/anonymousicml2529/rebuttal), we tested the CP-VAE on this data. The data consisted of 16-dimensional noisy channel measurements in the spatial domain. In a), you can see the overall power angular profile of newly generated samples (256-dimensional). In Fig. b), we trained the CP-VAE solely on LoS channels from the "LoS"-street canyon, as well as solely on NLoS channels respectively. It can be seen that the generated channel parameters from the LoS CP-VAE exhibit smaller angular spreads, which is expected. Moreover, in c), you can see exemplary plotted samples validating that the CP-VAE can easily distinguish between the LoS and NLoS channels. We will consider including these results in our work. **To Experimental Designs Or Analyses (2. paragraph) & Questions For Authors (2):** We agree with the reviewer that a further analysis regarding the hyperparameters would improve our work's contribution. In Fig. 2 and 3 in https://github.com/anonymousicml2529/rebuttal, we have preliminary results for these experiments. More precisely, in Fig. 2, we investigate the effect of the CP-GMM's number of components on the angular spread for the modified 3GPP data. For K=1, the CP-GMM coincides with M-SBL [1]. It can be seen that for K<4, the angular spread is significantly overestimated, which is to be expected as the underlying data is coming from a scenario with power from four different angular regions (cf. Fig. 3 in Appendix H). In Fig. 3, we investigate different grid resolutions for the QuaDRiGa 5G OFDM dataset. In these simulations, we vary the resolution of the delay-doppler grid with $S^2$ gridpoints from $S=16$ to $S=64$, while keeping the maximally resolvable delay at $8\mu\text{s}$ and the maximally resolvable doppler at $100\text{Hz}$. It can be seen that the grid resolution has only marginal effects on the performance of the autoencoder in a) and b). In c), we plot exemplary samples from the trained CP-GMMs, respectively. In our final version, we will consider including a more thorough discussion and experimental validation regarding various hyperparameters, such as the number K of CP-GMM components, the grid-resolution, and the maximally resolvable delay and doppler. Moreover, we will consider including some results for the measurement campaign. We would like to thank the reviewer dNNa again as we think that both remarks will improve our work's quality. [1] Wipf et al., ”An Empirical Bayesian Strategy for Solving the Simultaneous Sparse Approximation Problem,” IEEE TSP., vol. 55
Summary: This paper introduces a physics-informed generative modeling approach for wireless channels. By using sparse Bayesian generative modeling and knowledge about the conditional channel moments, this paper addresses limitations in existing methods such as the need for high-quality data, lack of generalizability, and poor physical interpretability. Claims And Evidence: The claims are well supported. Methods And Evaluation Criteria: This paper figures out a way to solve the drawback of sparse Bayesian generative modeling by using the conditional channel moments. Theoretical Claims: Not appliable. Experimental Designs Or Analyses: This paper made experiments for both SIMO and OFDM, and used open datasets to evaluate the proposed method. The experimental designs are sound and the evaluation metrics are well chosen. Supplementary Material: No. Relation To Broader Scientific Literature: The contributions of this paper is useful for designing algorithms for communication and signal processing, but I don't realize the contribution to machine learning. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: The paper provides a data-knowledge dual-driven way to model the wireless channel, which is important for the simulations in communication and signal processing. According to the experiments, the proposed methods present a superior performance over the other methods. Weakness: - The paper seems more suitable for communication/signal processing in my opinion. - It would be better if the paper could compare with more other methods. Other Comments Or Suggestions: Not appliable. Questions For Authors: 1. How do you choose the selection matrix "A"? Do you choose a good one by cross-correction? 2. What if the matrix “A” changes for different training channels? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer 9VKv for the review and the reviewer’s assessment. In the following, we address the concerns and questions raised. **To Relation To Broader Scientific Literature & Other Strengths and Weaknesses (bullet point 1):** We agree that our proposed model can be utilized in the context of communications and signal processing. Having said that, we also strongly believe that our work fits well with the application-driven machine-learning track of ICML. Our model builds a machine learning-based framework to simulate the propagation of electromagnetic waves in indoor and outdoor scenarios. For that, we enhance a machine learning-based framework (sparse Bayesian generative modeling) by incorporating specific insights from geometric optics and electromagnetic waves (i.e., the channel sparsity and the structural knowledge about conditional channel moments). In this sense, we derive a new application-driven machine learning-based tool that is not meant to solve one particular problem from communications or signal processing but can be used generally. We hope that the reviewer agrees with us on this. **To Other Strengths And Weaknesses (bullet point 2):** We now included two more baselines for physical parameter generation: an AmbientGAN that does not output channel realizations $\mathbf{h}$ but their compressible representation $\mathbf{s}$, and M-SBL from [1]. In Fig. 1 in the anonymous repository https://github.com/anonymousicml2529/rebuttal/, we plot preliminary results for the 3GPP dataset. It can be seen that AmbientGAN, as well as M-SBL, significantly overestimate the channel-wise angular spread. We would also like to refer to our response *To Experimental Designs or Analyses (1. paragraph)* to Reviewer 1 (9aFT), where we discuss the new baselines and the results in more detail. **To Questions For Authors (1)):** In our simulations for OFDM, $\mathbf{A}$ is chosen as a selection/masking matrix, i.e., it masks out a certain amount of entries in the corresponding channel. This is in line with the OFDM standard in wireless communications. The indices that correspond to the entries that are masked out are drawn randomly. Moreover, this masking operation is illustrated in, e.g., Fig. 2 e), where we plot the corresponding compressed training samples. **To Questions For Authors (2)):** Changing the observation matrix for different training samples is possible and has also been considered in [2]. Generally, changing the observation matrix either has no impact or even improves the overall performance as the model experiences varying ”subspace information” from the training samples during training. In our work, we only considered fixed measurement matrices as this is arguably more relevant for wireless channels. However, we will consider including simulations regarding varying the observation matrix in the final version. We would like to thank the reviewer 9VKv again for the valuable comments, and we think that the added baselines will improve our work's quality. [1] Wipf et al., ”An Empirical Bayesian Strategy for Solving the Simultaneous Sparse Approximation Problem,” IEEE TSP., vol. 55 [2] Böck et al., ”Sparse Bayesian generative modeling for compressive sensing,” NeurIPS, 24
Summary: The paper proposes a physics‐informed generative modeling framework for wireless channels that integrates physical channel knowledge with sparse Bayesian generative modeling (SBGM). The method consists of several key components: Channel Representation via Compressibility: The authors begin by exploiting the fact that wireless channels are compressible when expressed in a physically interpretable dictionary. In an SIMO channel, the representation is a sparse vector that captures the physical parameters (such as delays and angles). A similar representation is used for OFDM channels by defining a grid in the delay–Doppler domain. Sparse Bayesian Generative Modeling (SBGM): Building on SBGM, the method learns a parameterized distribution for the compressible representation of the channel. Incorporating Physical Structure: The model leverages known physical properties of wireless channels. Because path loss phases are uniformly distributed, channels tend to be zero-mean and, under stationarity assumptions, have structured (e.g., Toeplitz) covariance matrices. By choosing the dictionary appropriately, the conditional covariance naturally inherits a Toeplitz (or block-Toeplitz) structure. This aligns the learned model with established physical channel models. Model Variants – CP-VAE and CP-GMM: Two concrete implementations are proposed: Channel Parameter-VAE (CP-VAE): A variational autoencoder variant that learns the latent distribution and uses neural networks to parameterize $\gamma_\theta (z)$. Channel Parameter-GMM (CP-GMM): A Gaussian mixture model version where the latent variable is discrete and the model learns mixture weights and associated diagonal covariance matrices. Both variants are designed to capture the distribution of the compressible representations, which in turn determine the physical channel parameters. Kronecker Approximation for High-Dimensional Channels: In scenarios such as OFDM where the channel spans multiple domains (e.g., time and frequency), the model incorporates a Kronecker structure to the conditional covariance. This reduces the number of parameters by expressing the covariance as a Kronecker product of two smaller Toeplitz matrices, making the method more scalable. Generalizability and Data Efficiency: A notable aspect of the approach is its ability to learn from compressed and noisy observations (e.g., those collected during standard online operation) rather than requiring extensive high-quality training data. Furthermore, the learned generative model is shown to generalize to different system configurations (such as varying numbers of antennas or subcarrier spacings) by adapting the dictionary without retraining. Claims And Evidence: The evidence for the claims are reasonable. The authors validate their generative model by studying the power angular profile and channel-wise angle spread. They also use the channels generated by their methods and baselines to train autoencoders. The reconstruction error of these autoencoders are then evaluated, and the proposed method performs better. Methods And Evaluation Criteria: The method and evaluation datasets make sense, although there is confusion about what the contributions of this work are in comparison to prior work. The paper is written in an extremely confusing manner. The authors start section 4.1 with sparse modelling of wireless networks (which seems fairly standard and I don't understand what the innovation is here), then move to sparse generative modelling (again, seems like prior work from Wipf and Rao directly transfer), then move to section 4.3 where they talk about how wireless channels don't suffer the same problems suffered by other sparse dictionaries, and the techniques from Bock et al directly transfer with some modifications. There's too much unnecessary text that blurs the actual method proposed by the authors and what their contributions are. The introduction and background material take up almost 4.5 pages, which is far too much IMO. There's also no clear reasoning as to why this algorithm needs fewer training samples than existing work. There are no experiments that study the effect of training size on the proposed algorithm and baselines. Theoretical Claims: The paper does not report theoretical results. Experimental Designs Or Analyses: The benchmark algorithms seem weak. AmbientGAN is work from 2018 and WGAN-rbf seems like a method the authors made up (it does not contain a citation). Surely there must be more recent algorithms for learning wireless channels (I'm not an expert on wireless channel generation, so I cannot provide feedback on appropriate algorithms). Why not compare to the algorithms by Baur et al and Fesl et al from the related works section? Another reasonable evaluation would be on the performance of these models for compressed sensing estimation-- the current work is evaluate on reconstruction of fully observed channels, and I did not find experiments for estimation from compressed measurements. Supplementary Material: I did not. Relation To Broader Scientific Literature: The paper seems to be a significant contribution towards the generative modelling of parameter-based wireless channels. However, there are some concerns regarding the novelty over prior work by Bock et al. My reading of the paper is that most of the techniques for sparse generative modelling transfer to this case, with some modifications for wireless channels. Essential References Not Discussed: Seems reasonable. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: My main concerns are regarding the writing/description of the algorithm. A more detailed description of why your method is not a direction extension over the prior works by Bock et al, is appreciated. There's also no clear reasoning as to why this algorithm needs fewer training samples than existing work. There are no experiments that study the effect of training size on the proposed algorithm and baselines. Additionally, the performance of these models on compressed sensing estimation seems like the most relevant metric for judging the quality of these models. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer 9aFT for the thorough summary and review. In the following, we address the concerns and questions raised. **To Methods and Evaluation Criteria (2. paragraph) & Relation to Broader Scientific Literature & Questions for Authors (1. paragraph):** We agree with the reviewer that Section 4 contains a mix of our contributions and already existing ideas and techniques, such as channel sparsity, sparse Bayesian generative modeling, and knowledge of conditioned channel moments. We decided for this structure since our main contribution lies in recognizing that these topics can be combined to yield a generative model for wireless channels that is very different from existing approaches and overcomes the main concerns in concurrent methods. However, we understand that this might make extracting our contribution difficult. From our perspective, our work has considerable application-driven contributions: It addresses and overcomes the major concerns of state-of-the-art generative models for wireless channels by combining three topics that are non-trivial to combine, i.e., channel sparsity, sparse Bayesian generative modeling (which is originally not meant to generate samples), and prior knowledge of conditioned channel moments. By doing so, we derive a model that can train on corrupted data, is physically interpretable, and requires no retraining for changing system configurations. In addition, our work has a methodological contribution by deriving a new EM algorithm in Appendix F that exploits the conditioned Toeplitz structure. We will rewrite the final version so that our contribution is better separated from previous work. We thank the reviewer for making us aware of this potentially misleading structure. **To Methods and Evaluation Criteria (3. paragraph) & Questions for Authors (2. paragraph):** We would like to refer to Fig. 2 a) and b), where we investigate the effect of the training size. In the considered QuaDRiGa scenario, 100 training samples are sufficient for our method to perform well. **To Experimental Designs or Analyses (1. paragraph):** Generative modeling for wireless channels is rather new, and there are basically no proposed methods that learn from corrupted data. Almost all approaches are based on GAN-variants and assume lots of ground-truth training data. Although the methods in Baur et al. and Fesl et al. can learn from compressed data, they are not derived to generate samples but directly apply to estimating wireless channels from observations. Having said that, we included two more baselines now. In our submission, we compared to AmbientGAN, which can output channels h. However, AmbientGAN can also be trained to output the compressible channel representation s by including the dictionary D between generator and discriminator. This also allows us to compare to AmbientGAN for parameter generation. Moreover, we included M-SBL [1]. M-SBL is an SBL extension that assumes several observations instead of one. Indeed, M-SBL is equivalent to the CSGMM from [2] with only one component. In Fig. 1 in the anonymous repository https://github.com/anonymousicml2529/rebuttal/, we plot preliminary results for the 3GPP dataset. While AmbientGAN for s and M-SBL perform similarly for the power angular profile, both significantly overestimate the channel-wise angular spread. For AmbientGAN, this is due to the lack of sparsity prior knowledge. For M-SBL, it is because it fits a Gaussian to s, which is not expressible enough to capture different directions individually. This is also seen in the plotted samples (c). **To Experimental Designs or Analyses (2. paragraph) & Questions For Authors (3. paragraph):** There might be a misunderstanding regarding our evaluation method. We do **not** evaluate the reconstruction performance of our method in any experiments. An autoencoder reconstructs generated data as a means to evaluate the **generation** performance of our model. Specifically, the autoencoder is first trained on data generated by our model. It then reconstructs ground-truth channels. As our model generates new channels that are supposed to resemble ground truth, we do not think that training the autoencoder on compressed data is a good fit for evaluation. Applying our model to compressed sensing, as in [2], evaluates the performance for reconstruction. However, we consider the generation performance, so we evaluate the quality of newly generated samples. If the reviewer is referring to something different and we misinterpret the statement, we would appreciate it if the reviewer could comment on this point again. We would like to thank the reviewer again as we think that restructuring our work together with the newly added baselines will improve our work's quality. [1] Wipf et al., ”An Empirical Bayesian Strategy for Solving the Simultaneous Sparse Approximation Problem,” IEEE TSP., vol. 55 [2] Böck et al., ”Sparse Bayesian generative modeling for compressive sensing,” NeurIPS, 24
null
null
null
null
null
null
CoDy: Counterfactual Explainers for Dynamic Graphs
Accept (poster)
Summary: This paper introduces CoDy (Counterfactual Explainer for Dynamic Graphs), a method for generating counterfactual explanations to interpret predictions made by Temporal Graph Neural Networks (TGNNs) on continuous-time dynamic graphs (CTDGs). Existing explanation methods focus on static graphs or factual explanations, which are insufficient for capturing temporal dependencies and actionable scenarios in dynamic graphs. To close this gap, CoDy combines Monte Carlo Tree Search (MCTS) with heuristic policies to efficiently explore the search space of past events, identifying minimal subsets of events whose removal alters the model’s prediction. The authors also develop a greedy baseline that iteratively selects events causing the largest immediate prediction change. Experimental results empirically demonstrate the superior performance of CoDy in identifying concise yet impactful counterfactual explanations. ## update after rebuttal Thanks for the authors' efforts in the rebuttal. I intend to keep my rating. Claims And Evidence: The main claims made in this paper are generally supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-aligned with the problem of explaining TGNN predictions on dynamic graphs. Theoretical Claims: No theoretical claims were made in this paper. Experimental Designs Or Analyses: The soundness of all the experimental designs and analyses has been carefully checked. Supplementary Material: The appendix of this paper has been carefully reviewed. Relation To Broader Scientific Literature: This paper advances prior works in two key areas: GNN explainability and counterfactual reasoning. Existing GNN explanation methods focus on static graphs, lacking effectiveness to handle temporal dependencies. For dynamic graphs, prior methods are limited to factual explanations, which identify contributing features but fail to explore "what-if" scenarios. CoDy bridges this gap by adapting counterfactual explanation to temporal contexts, leveraging insights from causal reasoning and temporal graph modeling. Essential References Not Discussed: Most of the key related work are cited. Other Strengths And Weaknesses: Strengths: S1. This paper introduces a novel counterfactual explanation framework for temporal graph neural networks on continuous-time dynamic graphs, addressing a clear gap in dynamic graph interpretability. S2. This paper presents a well-defined methodology that combines Monte Carlo Tree Search with spatio-temporal and gradient-based selection policies. S3. The experimental evaluation is comprehensive. The authors test the proposed method CoDy on multiple datasets and compare against both factual and greedy baselines. Weaknesses: W1. The spatio-temporal and gradient-based policies may introduce a large bias, favoring recent or proximal events while neglecting older, causally critical events. W2. While the authors claimed that CoDy is model-agnostic, its reliance on prediction logits may produce inconsistent results for TGNNs with differing architectures, which the paper does not investigate. W3. Discussion on the reason why CoDy outperforms factual methods in achieving better AUFSC- scores is not given. This is a bit counter-intuitive because CoDy is not configured to achieve this. W4. Although the evaluation is extensive, it relies primarily on quantitative metrics. It would be great if the authors could also incorporate user studies or human-centered evaluations to assess the interpretability of the generated counterfactuals. W5. Nit picking: the experiments focus on social/interaction networks. Performance on dynamic graphs with fundamentally different dynamics (e.g., biological networks, sensor networks) remains unverified, raising questions about method universality. Other Comments Or Suggestions: The notation p in Equation (6) can be easily confused with the binary classification function p. Questions For Authors: Please refer to the weaknesses. Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your time and insightful feedback on our paper. Please find our rebuttal to the weaknesses below: **W1: Potential bias in selection policies** The selection policies are indeed designed as heuristics to introduce a bias, guiding the search efficiently within the vast combinatorial space of potential counterfactual subgraphs (past events). This guidance is crucial for tractability. Our experiments explicitly demonstrate the value of this bias. While these policies prioritize recent/proximal/high-impact events, the MCTS framework in CoDy, particularly the exploration component in the selection score (Section 4.3), inherently encourages exploring less-immediately-promising paths. This helps mitigate the risk of exclusively focusing on recent events and allows CoDy to discover counterfactuals involving older or less obviously connected events if they are indeed critical. The search doesn't only consider these events; it prioritizes them while retaining the ability to explore alternatives. **W2: Reliance on prediction logits** We appreciate the reviewer's point regarding the term "model-agnostic." We use this term to indicate that CoDy does not require access to the internal architecture, parameters, or gradients of the TGNN model, unlike gradient-based or model-specific explainers. It interacts with the model solely through its input (perturbed graph history) and output (prediction score). We assume that a TGNN outputs a continuous output signal (like logits or probabilities), which is the case for all TGNNs that we are aware of. In a case of a TGNN that strictly produces binary outputs, there would indeed be little guidance for efficient search, likely necessitating brute-force approaches (however, we do not percieve this a realisitc setup of a TGNN). **W3: CoDy outperforms factual methods on $AUFSC_-$ scores** This is an excellent observation, and we agree it appears counter-intuitive since CoDy optimizes for counterfactuals ($fid_+$). The high $fid_-$/$AUFSC_-$ scores (sufficiency) achieved by CoDy, especially compared to T-GNNExplainer in several settings (Table 1), stem primarily from CoDy's search objective and fallback strategy. CoDy seeks the minimal necessary set of events to change the prediction. When a true counterfactual is not found within the search budget (either because none exists in the constrained search space or the search limit is reached), CoDy's fallback strategy (Section 4.3.5) returns the perturbation set $s_opt$ that caused the largest shift in the prediction logits. This $s_opt$ represents the set of most influential past events identified in the search. Even if not enough to flip the prediction, these events often still capture sufficient factual evidence for the prediction of the model. In fact, our evaluation shows, that this is highly effective in uncovering factual evidence. In contrast, factual methods like T-GNNExplainer aim directly for sufficiency but might identify larger subgraphs or be affected by approximations (as noted in their paper and our Section 5.3.2), potentially leading to lower $fid_-$/$AUFSC_-$ scores in some cases compared to CoDy's focused, impact-maximizing search (even in fallback). **W4: Lack of user studies or human-centered evaluations** We fully agree with the reviewer that user studies are invaluable for assessing the practical interpretability and utility of explanations. However, the primary contribution of this work is methodological: introducing the first counterfactual explanation framework (CoDy) specifically for TGNNs on CTDGs and rigorously evaluating its quantitative performance against existing factual methods and a novel greedy baseline using established metrics ($fid_+$, $fid_-$, sparsity, etc.). Given this focus, we considered extensive human-centered evaluations to be beyond the scope of this initial paper. Acknowledging the importance of user studies, we will explicitly recommend them as a direction for future work in the conclusion section of the revised manuscript. **W5: Experiments focus on social/interaction networks** Our choice of datasets (Wikipedia, UCI-Messages, UCI-Forums) was motivated by their prevalence in TGNN research and their representation of important real-world dynamic systems. While they primarily fall under social/interaction networks, they exhibit considerable diversity: - Unipartite (UCI-Messages) vs. Bipartite (Wikipedia, UCI-Forums). - Presence (Wikipedia) vs. Absence (UCI) of edge features. - Varying temporal dynamics and graph densities (Table 2). We believe these datasets provide a strong and representative initial validation of CoDy's effectiveness. While our datasets focus on interaction networks, the methodology extends to broader CTDG applications, such as financial modeling and dynamic recommendation systems. --- Rebuttal Comment 1.1: Comment: I thank the authors' response, which solved all my previous concerns. I would like to keep my rating.
Summary: The paper introduces CoDy, a method for generating counterfactual explanations for Temporal Graph Neural Networks (TGNNs). Unlike existing methods that focus on factual explanations, CoDy explores how minimal modifications to a dynamic graph can alter predictions (counterfactuals). CoDy employs Monte Carlo Tree Search (MCTS)-based algorithm with heuristic selection policies that leverage temporal, spatial, and local gradient (*why did you call it like this?*) information to efficiently identify counterfactual subgraphs. To benchmark its performance, the authors develop GreeDy - although there are other static counterfactual explainers in the literature - and propose an evaluation framework assessing necessity and sufficiency of explanations. Experiments on three real-world datasets (Wikipedia, UCI-Messages, UCI-Forums) and two TGNN models (TGN, TGAT) show that CoDy outperforms factual and counterfactual baselines. The results highlight that counterfactual explanations offer a distinct interpretability advantage by illustrating what minimal changes would lead to different outcomes, especially for misclassified predictions. CoDy provides a structured approach to exploring decision boundaries in dynamic graphs. ## Update after rebuttal For the AC: I changed my score from weak accept to strong accept. I went back and forth with the reviewers, and they've answered most of my questions and followed my suggestions. There were some disagreements with them, but that shouldn't undermine this paper's acceptance. I'm willing to champion this paper as it's the best in my batch of reviews. Claims And Evidence: The authors claim to be the first to propose a continuous temporal graph counterfactual explainer, *which is fair enough*. To the best of my knowledge only [1] does temporal graph counterfactual explanations on a snapshot-based graph. However, I'm not sure why the authors need to distinguish between the two? The snapshot-based graph methods can be easily transformed into their continous version by just not framing the graph in $\Delta$ time intervals of the same time span. [1] Prenkaj et al. Unifying Evolution, Explanation, and Discernment: A Generative Approach for Dynamic Graph Counterfactuals. KDD'24 The authors also claim to have a $6\times$ improvement in fidelity. However, I can't seem to find where this claim is supported. Table 1 doesn't showcase this, or am I missing something? Methods And Evaluation Criteria: They partially do, however the authors miss a lot of related work and metrics used in the literature to solidify their empirical claims for CoDy. Please see *Experimental Designs and Analyses*. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: I believe the authors didn't explore enough time-graph datasets. I found out that the TUDataset website has a lot of them that are easy to integrate in the authors' codebase and contain continuous time events. Examples of datasets follow: 1. DBLP-Coauthors [1] 2. BTC-Alpha, BTC-OTC [2] 3. Bonanza [3] [1] Benson et al. Simplicial closure and higher-order link prediction. Proceedings of the National Academy of Sciences. 2018 [2] Kumar et al. Rev2: Fraudulent user prediction in rating platforms. In WSDM'18 [3] Derr et al. Balance in signed bipartite networks. In CIKM'19 ---- The authors decide to adapt a single explainer, PGExplainer, from static scenario to a dynamic one. How? Why only PGExplainer? Also are you adapting it for counterfactuality? If not, how are you evaluating fidelity on it? Additionally, there are a lot of other counterfactual - although PGExplainer is factual (see question #6) - explainers. For example, Prenkaj et al. [4] adopted a lot of static counterfactual explainers: BDDS [5], MEG [7], CLEAR [6]. The authors might scrutinize the goodness of these adapted explainers in a DTDG scenario. Nevertheless, to be fair to previous work, I would suggest to follow the same strategy to complete the analyses made in this paper. For now, the authors just compare to 2 explainers. Also, in the literature, researchers compare against a random baseline - and I'm not talking about *GreeDy-rand.* - to check the sanity of their proposed method. [4] Prenkaj et al. Unifying Evolution, Explanation, and Discernment: A Generative Approach for Dynamic Graph Counterfactuals. In KDD'24 [5] Abrate & Bonchi. Counterfactual graphs for explainable classification of brain networks. In KDD'21 [6] Ma et al. Clear: Generative counterfactual explanations on graphs. NeurIPS'22 [7] Numeroso and Bacciu. Meg: Generating molecular counterfactual explanations for deep graph networks. IJCNN'21 --- Recall that fidelity is a measure based on the underlying predictor. See Prado-Romero et al.'s [8] definition (see pag. 23): $$\Psi(x, x') = \chi(x) − \mathbf{1}[\Phi(x') = y_x],$$ where $x$ is the input, $x'$ is the counterfactual produced. $\chi(x)$ gives 1 if $x$ was correctly classified, and 0 otherwise, and $y_x$ is the ground truth label of $x$. A value of 1 entails that both the explainer and predictor are working correctly. 0 and −1 describe something wrong with the explainer or the oracle. However, we cannot attribute the incorrect function to the explainer or the oracle. This is a shortcoming of fidelity since it bases the assessment of the correctness on the ground truth label $y_x$ instead of the prediction $\Phi(x)$. Therefore, the literature [1-8], use also validity (correctness) to evaluate their counterfactual's goodneess. This metric is never used in this paper. Furthermore, the structural distance, measured by Graph Edit Distance (GED) [8], is never used. The used *sparsity* metric just measures the distance on the node features. However, since the authors advocate for spatial distance as well, it's wierd that GED isn't there, which undermines the claims of the search tree producing "close-by" counterfactuals. [8] Prado-Romero et al. A survey on graph counterfactual explanations: definitions, methods, evaluation, and research challenges. CSUR'24. --- The authors claim that their local-gradient CoDy variant is the best-informed counterfactual explainer. Table 1 shows that the choice for CoDy is random. Sometimes, even the GreeDy variant surpasses it. That's why correctness (validity) and GED should be here. **Moreover, the abstract states that the authors have a 6$\times$ improvement in fidelity than SoTA methods. This improvement doesn't appear to emerge from the table!** --- It seems very wierd that the authors try to evaluate both necessity and sufficiency, as does CF$^2$ [9], but never compare to it. Why choose PGExplainer as comparison and not CF$^2$ undermines the authors claims to be the best in *AU FSC-*. This method is specifically developed to minimize both necessity and sufficiency. [9] Tan et al. Learning and evaluating graph neural network explanations based on counterfactual and factual reasoning. In WWW'22 Supplementary Material: I just skimmed through it. I saw something wierd in Fig. 9 that makes you rethink the claim of the "informedness" of CoDy-local-grad. I also believe that the exploration vs. exploitation strategy should be governed by only $alpha \in [0,1]$ since then you can actually measure the impact of each mechanism into the contribution of the overall counterfactual finding process, which would solidify Sec. G's findings. Now, it feels very empirical and non-conclusive, unfortunately. Relation To Broader Scientific Literature: This works opens doors to graph counterfactual explainability in time-graphs although this field has already been partially explored in [1]. I can't seem to find why [1] nor this paper should or can have a broader scientific impact. Both paper's evaluations are in controlled environments that do not show how they could be employed in real-world scenarios, although both authors argue that *"their datasets are real-world*. [1] Prenkaj et al. Unifying Evolution, Explanation, and Discernment: A Generative Approach for Dynamic Graph Counterfactuals. In KDD'24 Essential References Not Discussed: I came across this paper [1] that trates temporal graph counterfactual explainability when considering the time-graph as a sequence of snapshots - i.e., what the authors named DTDGs. I believe it should be treated at least in the related work section. I'm not sure wheather it would fit the authors' comparisons since they discretize the time component and get snapshots of the graph changes. I read Prenkaj et al.'s paper and I believe they explain GNNs on a specific snapshot, particularly the first one, and then use their explainer to generatively classify the incoming graphs and explain them dynamically. This doesn't seem to fit with what CoDy is trying to do; however, I still believe that this comparison between "real" TGNN and snapshot-based counterfactual explainability should be treated in the **Related Work** section. To the best of my knowledge, this is the only work - after some search - in dynamic counterfactual explainability in graphs. Maybe instead of **GreeDy**, the authors could compare against **GRACIE** [1]? *[1] Prenkaj B, Villaizán-Vallelado M, Leemann T, Kasneci G. Unifying Evolution, Explanation, and Discernment: A Generative Approach for Dynamic Graph Counterfactuals. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2024 Aug 25 (pp. 2420-2431).* I believe Prado-Romero et al's bibtex citation should be the following. *@article{prado2024survey, title={A survey on graph counterfactual explanations: definitions, methods, evaluation, and research challenges}, author={Prado-Romero, Mario Alfonso and Prenkaj, Bardh and Stilo, Giovanni and Giannotti, Fosca}, journal={ACM Computing Surveys}, volume={56}, number={7}, pages={1--37}, year={2024}, publisher={ACM New York, NY} }* I had complaints in other venues from reviewers that the paper isn't only an arXiv- Rather it is an ACM CSUR publication. Just to give the authors a heads-up. --- The first paragraph in the related work section misses a lot of static counterfactual explainers (e.g., [2-8] among others). [2] Abrate & Bonchi. Counterfactual graphs for explainable classification of brain networks. KDD'21 [3] Ma et al. Clear: Generative counterfactual explanations on graphs. NeurIPS'22 [4] Numeroso & Bacciu. Meg: Generating molecular counterfactual explanations for deep graph networks. IJCNN'21 [5] Prado-Romero et al. Robust stochastic graph generator for counterfactual explanations. AAAI'24 [6] Lucic et al. Cf-gnnexplainer: Counterfactual explanations for graph neural networks. AISTATS'22 [7] Bajaj et al. Robust counterfactual explanations on graph neural networks. NeurIPS'21 (*factual-based explainer just like PGExplainer*) [8] Chen et al. D4explainer: In-distribution explanations of graph neural network via discrete denoising diffusion. NeurIPS'23 [9] Faber et al. Contrastive Graph Neural Network Explanation. ICLM'20 Workshop (*this guy always produces correct counterfactuals, but it's bound by the dataset: i.e., can't generate something ex-novo*) Other Strengths And Weaknesses: See other sections, please; the reviewer form is already cluttered with a lot of sections. The paper raises a lot of questions, specifically in the experimental section, and the lack of prudent evaluation with the plethora of static graph counterfactual explanations, and the usage of adequate metrics to measure the goodness of the proposed method. While the authors want to showcase *fidelity-* and *fidelity+* as CF$^2$ [1] does, it's a puzzle why they refrain from comparing CoDy to an adapted version of CF$^2$ to dynamic graphs... [1] Tan et al. Learning and evaluating graph neural network explanations based on counterfactual and factual reasoning. In WWW'22 Other Comments Or Suggestions: No. Questions For Authors: 1. So is CoDy only able to perform edge addition/removal operations to generate the counterfactuals? There's no node addition/removals right? 2. Why are you calling your selection strategy *local-gradient*? It's just a logit-based difference... Are you saying that you have access to the gradients of the predictor? I don't think so; otherwise, you're "blowing-up your cover" for treating black-box TGNNs; rather they should be called grey-boxes. I'd argue that you change this nomenclature to something more suitable. 3. Instead of having the heuristic guide your tree traversal, have you thought of learning the path that guides you toward the counterfactual? If so, how would you do that? If not, why didn't you? CoDy's tree algorithm "screams" reinforcement learning to me: did you consider doing this? My question would be, what's the motivation behind Monte-Carlo Search and not other strategies? 4. In Eq. (7) why is the exploration and exploitation governed by $\alpha$ and $\beta$, and not only $\alpha$? In other words, why isn't the equation equal to $selscore(n_k) = \alpha\cdot score_k + (1-\alpha)\cdot score_{explore}(n_k)$? 5. When you have multiple counterfactual candidates, you state that you select the one with the largest shift in the TGNN prediction. Doesn't this, in some sense, suggest that you're overshooting the decision boundary of your predictor? See Wachter et al's distance requirement [1]. Or is this distance tackled during the search algorithm? 6. The authors use PGExplainer as a baseline for static graphs and adapt it to dynamic ones. I wonder why the authors chose a factual explainer in a counterfactual fashion? This doesn't make sense. How are you evaluating fidelity on a factual explainer? How are the authors generating counterfactuals with a factual explainer? Are you simply removing the factual explanation generated from the input graph to hopefully produce a valid counterfactual? 7. Since fidelity has been shown to have inherent weaknesses as a metric (see [2]), why don't you use correctness (validity) to assess the goodness of the produced counterfactuals? 8. Can you please show the Graph Edit Distance (GED) [2] of your counterfactuals? This is a necessary metric to support your claims of the search tree producing "close-by" counterfactuals. 9. Where is your claim of $6\times$ improvement in fidelity supported? I can't see it from Table 1. 10. I don't understand what correct and wrong predictions are in Table 1. Can you please clarify? I can't seem to find anything related to it in the experimental section. 11. In Fig. 9, it is concerning that Cody-Rand. has >0.35 Jacard similarity with the other "more informed" variants, especially with "local-grad." How do you explain this? Is the local-grad. variant that informed? Recall that is just the difference between logits in the original prediction and the counterfactual one. [1] Wachter S, Mittelstadt B, Russell C. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech.. 2017;31:841. [2] Prado-Romero MA, Prenkaj B, Stilo G, Giannotti F. A survey on graph counterfactual explanations: definitions, methods, evaluation, and research challenges. ACM Computing Surveys. 2024 Apr 9;56(7):1-37. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your comprehensive review and valuable feedback. **1.** CoDy strictly removes events from the input graph reflecting the causal relationship between those events and the prediction; i.e., if events $\\{e_i,...\\}$ would not have happened, the prediction would be different. The counterfactuals thus depend on the identified events, including edge and node additions/removals (note that the evaluated TGNNs only cover additions) **2.** We use "local-gradient" to reflect the impact that removing a single event has on the prediction logits. The term "local gradient" reflects this in the sense of the influence that an infinitesimal change to the input (removal of event) has on the output. We agree that the term could be misconstrued as implying access to model internals and are thus renaming it to “local-event-impact”. **3.** Our search algorithm balances exploration and exploitation, dynamically adapting the search based on previously explored trajectories, similar to Monte-Carlo Tree Search. This flexibility is especially important in a vast combinatorial space as in CTDGs. We have indeed considered learning an MLP to inform a selection strategy, however, our initial experimentation with this gave a strong indication that such a surrogate fails to perform sufficiently to provide a good selection indication in this setting. **4.** We have followed your suggestion and simplified the formulation. **5.** Our goal in selecting the candidate with the largest shift is to identify the minimal set of features whose removal has the most decisive impact on the prediction. In many counterfactual explanation methods (e.g., Wachter et al.), the distance from the decision boundary is a primary metric, and our approach follows a similar principle by prioritizing the “critical” events. To ensure a balanced selection, we combine this approach with a sparsity criterion, avoiding unnecessary complexity while maintaining interpretability. **6.** Our procedure for PGExplainer was to generate factual explanations - i.e., the subgraph that supports the prediction - and then construct a counterfactual by removing this subgraph from the input graph. Although this is a heuristic adaptation, it provides a baseline for comparing fidelity. Adapting most static CF methods (especially generative ones like CLEAR or optimization-based like MEG) to the CTDG setting requires significant methodological changes (handling event sequences, temporal dynamics, defining valid edits in time) and engineering effort, making it a substantial research project in itself. Our focus was on establishing a baseline within the CTDG CF setting, which motivated the development of GreeDy. We believe GreeDy serves as a more direct and fair baseline for CoDy's search strategy within the same problem formulation. On a similar note, CF² requires applying continuous masks on nodes/edges which is difficult to translate to the event-based nature of TGNNs like TGAT and TGN. **7 + 10.** Fidelity definitions vary across works. We follow GraphFramEx’s model-focused fidelity definition (Amara et al. 2022) and clarify this in Appendix D. We evaluate instances in which the TGNN makes correct predictions separately from instances where it misclassifies future links. We state this in 5.1 under “Explained Instances”. Given your feedback, we have added further clarification on this and the fidelity metric. **8.** We appreciate the suggestion to use GED. However, CoDy only performs event removal. In this specific setting, the number of removed events (captured by the numerator in sparsity) is the graph edit distance from the original graph's event set to the perturbed graph's event set (where edits are event deletions). Since CoDy operates solely via event removal, sparsity naturally captures structural distance, making GED an unnecessary addition in this context. **9.** Thank you for highlighting this. The 6.11x improvement in fidelity refers to average improvement of CoDy-spatio-temporal over PGExplainer shown in the table in Appendix F. The number is an artefact of a previous revision and we will replace it with an insight from Table 1 and add an explanation. **11.** We believe the Jaccard similarities are a natural consequence of the fact that, regardless of the selection policy, all variants start with the same underlying search tree and balance exploration & exploitation, steering the search along promising paths. When the key causal events are strongly influential, even a random policy may discover similar substructures. For the local-gradient variant, although its ranking metric is based on the logit difference, the fact that its explanations significantly overlap with those from the spatio-temporal variant reinforces that the same vital events are consistently identified. We thank the reviewer for highlighting other related works. We have added GRACIE to related work. While addressing a different problem, it is an important addition for completeness. --- Rebuttal Comment 1.1: Comment: Q1: Nice, thank you for the clarification. Q2: I agree that the new nomenclature makes more sense. Q3: I'd be very interested if you can extend your Monte-Carlo approach to an RL-based approach in the future. I believe that the exploration and exploitation of an RL can be easily adapted to your case. Q4 (**+ additional questions**): Oh, cool! How is now the contirbution of each of the two components? You definitely can measure it now, right? What $\alpha$ did you choose here? Is it the old $\alpha$? If so, how do your empirical insights (experiments and other stuff) change? Do you maintain the same performance? Q5: I'm a bit confused. I read your answer 5 times now, and can't seem to wrap my head around it. Sorry. As a follow-up for me to understand, you said that you use sparsity as a metric to choose the counterfactual that has the lowest impact in the predictor's outcome. How do you measure this impact? And, is sparsity defined as in [1] (pag. 22)? Q6: Ok, my intuition was correct about PGExplainer, pheww :-). Out of curiosity how would you proceed to "port" CF$^2$ in your scenario? Q7 (**fidelity doubts again**): I'm aware that Amara et al. 2022 got already published (in a NeurIPS workshop), however their claims completely ignore citing Tan et al.'s [3] (WWW'22 held 6 months before NeurIPS'22) paper that introduce $fid_+$ and $fid_-$ as scores. *I believe citing the original contribution aka CF$^2$ is a good compromise here*. In my opinion, Yuan et al.'s [4] survey that Amara refers to is completely done for factual explanations and doesn't have anything to do with counterfactuality. In fact, it never talks about fidelity in the entire survey since it doesn't make sense to measure fidelity in factual scenarios. Therefore, you refering to Amara's paper which in turns refers to Yuan's factual explainability survey - where again fidelity is never mentioned - doesn't make sense. My suggestion would be to cite [1,2,3] (one of them). Q10: Can you please let me know what you changed exactly? Q8 (**sparsity confusion, again!**): What are you refering to sparsity here? Graph sparsity? It's a bit confusing since sparsity is a metric defined on the changes in the node features of a graph (see Q5). Q9: Cool! Q11: You mentioned a random perturbation strategy might have a similar effect on the Jaccard similarity. In light of this, can you try a random strategy (e.g., iRand [5] or any other for that matter) in your scenario and report the Jaccard similarity, please? I'm curious to see what the effect of the selection strategy is in reality. [1] Prado-Romero et al. A survey on graph counterfactual explanations: definitions, methods, evaluation, and research challenges. ACM Computing Surveys. 2024 Apr 9;56(7):1-37. [2] Guidotti R. Counterfactual explanations and how to find them: literature review and benchmarking. Data Mining and Knowledge Discovery. 2024 Sep;38(5):2770-824. [3] Tan et al. Learning and evaluating graph neural network explanations based on counterfactual and factual reasoning. WWW'22 [4] Yuan et al. Explainability in graph neural networks: A taxonomic survey. TPAMI 2022 Sep 5;45(5):5782-99. [5] Prado-Romero et al. Are Generative-Based Graph Counterfactual Explainers Worth It?. InJoint European Conference on Machine Learning and Knowledge Discovery in Databases 2023 Sep 18 (pp. 152-170). Cham: Springer Nature Switzerland. --- Reply to Comment 1.1.1: Comment: Thank you for your continued engagement and follow-up questions. Q4: We changed the formula as suggested. To maintain consistency with our original experiments, we set the new $\alpha=\frac{2}{3}$, preserving the original 2:1 weighting between exploration and exploitation terms ($\alpha=2$, $\beta=1$). Because this change maintains the same balance, **all empirical results, metrics, and insights remain unchanged.** The contribution of the two terms cannot directly be measured, as the terms interact throughout the search. However, the sensitivity analysis presented in Appendix G provides insights into the impact of this balance, demonstrating the impact of adjusting this balance. Q5: **1. Sparsity:** We define the sparsity of an explanation as $sparsity=\\frac{|\\mathcal{X}\_{\\varepsilon_i}|}{C(\\mathcal{G},\\varepsilon_i,k,m_{max})|}=\\frac{Number\\ of\\ events\\ in\\ explanation}{Number\\ of\\ candidate\\ events}$ (see Appendix D), reflecting the sparsity of event selection. In other words, mapping to the definition in [1], we define $x=C(\\mathcal{G},\\varepsilon_i, k, m_{max})$ (set of candidate events considered for explanation) and $x'=C(\\mathcal{G},\\varepsilon_i,k,m_{max})\\setminus\\mathcal{X}\_{\\varepsilon_i}$ (candidate events without events identified as part of explanation). $D_{inst}(x, x')$ then captures the number of events in the explanation ($|\\mathcal{X}\_{\\varepsilon_i}|$). We choose the number of candidate events as denominator as it represents the maximum perturbation size within the search space, providing a consistent scale for comparison across instances. Using the entire graph history would result in extremely small and less informative sparsity values. **2. Explanation Selection:** For selecting a counterfactual example from multiple identified examples, we first filter the identified samples to keep only those with the minimum size (minimum $|\\mathcal{X}\_{\\varepsilon_i}|$), directly minimizing the number of changes (event removals) aligning with the "closest possible world" in the words of Wachter et al.. If and only if there remain multiple counterfactuals with the same minimal size, we then use the magnitude of the prediction shift (largest $\\Delta(p_{orig},p_j)$, Eq. 6) as tie-breaker, selecting the most decisive explanation among equally minimal options. Q8: Adding to Q5: In short, for our case: $GED=|\mathcal{X}\_{\\varepsilon_i}|$, which is the numerator in our sparsity definition. Thus, the sparsity captures the structural distance ($GED$), normalized by the size of the search space ($|C(...)|$). Q6: In theory, the main hurdle in porting CF² lies in implementing a continuous edge (and feature) masking mechanism into the explained TGNNs. In a TGN(-attn) model this could be done by integrating such masks into the temporal attention layer. Importantly, one would have to account for correctly handling memory updates. Practically, computational cost are a big hurdle. CF² requires fitting the explanation model (the masks) for each explained instance. Given that inference in complex TGNNs like TGN is highly computationally expensive (as shown by runtime analysis in Appendix F), iterating through a training/optimization process involving repeated TGNN calls for every single explanation would be prohibitively slow for evaluation and practical application on large dynamic graphs. Q7: Thank you for your diligence regarding the citation history for fidelity. We agree that acknowledging the foundational work is essential and will cite the original contribution (CF²). While fidelity concepts are broadly used in GNN explaination literature (e.g., Yuan et al. [4], Sec. 7.2.1), the specific formulation and application to counterfactual necessity and sufficiency are clearly articulated in CF². Q10: 1. We replace "wrong" with "incorrect". 2. In *5.1 Explained Instances* we add: "we evaluate instances where the TGNN makes correct predictions separately from instances where it makes incorrect predictions." 3. We clarify *5.2 Fidelity* in line with Q7. 4. We add more detailed descriptions to the formulas in Appendix D. Q11: To clarify: CoDy-random is not a purely random strategy. It utilizes the complete search framework (score based selection, simulation, expansion, backpropagation). The "random" aspect applies mainly to guiding the exploration of unvisited nodes. This inherent guidance explains why CoDy-random finds explanations overlapping with those of more informed policies, particularly if certain events are highly influential. For completeness, we've run an experiment with iRand, obtaining the following Jaccard Similarity scores compared to CoDy (same settings as Appendix F.3): - CoDy-random: 0.08 - temporal: 0.08 - spatio-temporal: 0.09 - local-event-impact: 0.09 These low scores (<0.1 vs >0.35 among CoDy variants in Appendix F.3) confirm that a purely random strategy differs significantly, highlighting the effectiveness of CoDy even with a random selection policy.
Summary: This paper proposes CoDy and GreeDy, two algorithms that can generate counterfactual explanations for continuous-time dynamic graphs (CTDGs). The main component of the algorithm is a Monte-Carlo-Tree Search to find good edge combinations for removal. The method is experimentally evaluated on three dynamic graphs. Claims And Evidence: The main claim of this paper is the method CoDy. While some evidence for the method is there, I think it could be made more compelling (see below). Methods And Evaluation Criteria: I think the metrics are okay, but I wonder how the sparsity of the explanation can be selected with CoDy/GreeDy. It seems the evaluation AUFSC was done with different sparsity levels. I there a hyperparameter to select this or did the authors just rank the explanations? I am not sure whether this metric would be valid then, because if a method assigns non-sparse but always correct explanations it should get a high score, vs. a method that tries to optimize sparsity, but is not always correct. I think it would make more sense to compute this curve for a hyperparameter that trades off sparsity vs. fidelity / correctness. There are three datasets used, which is okay but also not a very extensive benchmarking. Theoretical Claims: There are no theoretical claims in this work. Experimental Designs Or Analyses: The claimed superiority over the baselines is studied in experiments with several datasets and baselines. I could not find any substantial flaws in the setup. However, I think the results are mixed so I am not entirely enthusiastic. First, the generated explanations do not seem to be very reliable. As far as I understand, e.g., the char score is supposed to be between 0 and 1. On average, the best-performing methods presented in this work show scores between 0.4 and 0.5., which does not seem very good. GreeDy further shows very good performance on TGAT in particular, so there is no clear suggestion on which method to use. The spatio-temporal method has the best performance, but also makes the most calls to the model (Table 5, Appendix). Ablations of the different components: The method has several steps, and I am unsure how systematic the exploration of these is. So far, I only see an ablation for the selection policy. There is some improvement over random, but it is small. Further, the results from GreeDy show that greedy search is also quite good. Given the 10-fold increase in runtime (Appendix F.1), I am not convinced that the full method is worth the additional investment in practice. Generally speaking, the experimental design is valid, but the experimentation could be more extensive, and have better insights (it is basically a huge results table, where some methods will on some datasets, some on otheres, but I cannot see a clear pattern or learning from it). Supplementary Material: Skimmed supplementary material, looked at TGAT results, runtime. Relation To Broader Scientific Literature: This work contributes a new method for *continuous-time dynamic graph counterfactual explainability*. I agree that there are not many methods for this graph counterfactual of problem and therefore the paper does make a contribution. However, as the many specifiers to the domain suggest, I think the problem is relatively niche and only relevant to a very specific subgroup of the ICML community. Essential References Not Discussed: I am not aware of highly relevant missing references. However, I think the connection between discrete time and continuous time graphs could be more deeply discussed. There are simple processing methods to convert discrete into continuous time graphs and vice-versa. However, for discrete time graphs, there are already explanation methods, like Prenkaj et al. (2024). I wonder wheter applying these methods on a discretized version of the graph would not already solve the problem. I think this connection necessitates further consideration. Prenkaj, Bardh, et al. "Unifying Evolution, Explanation, and Discernment: A Generative Approach for Dynamic Graph Counterfactuals." Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024. Other Strengths And Weaknesses: **Summary.** I find it hard to form an opinion about this paper. On the one hand there are no obvious methological flaws. The writing is good. However, I am not very enthusiastic about the experimental results, which don't seem to tell a clear story. Also the problem feels very specific. Overall, I award this paper a borderline rating. Other Comments Or Suggestions: Minor * i.e., (comma missing, l.124, right) * Equation 2 does not seem to make much sense: What does the vector notation mean? It seems we have a vector where one element (upper) is a graph and the other one (lower) is an integer Questions For Authors: Can you please explain how the sparsity of the explanations is controlled and how the AUFSC is computed? Also, what sparsity levels are the fid+, fid- scores computed at? How do you explain the difference in performance between the algorithm versions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for the detailed and insightful review. **Runtime vs. Performance Trade-off** The noted potential 10-fold increase in model calls for CoDy is largely a configurable design choice, not an inherent limitation of CoDy. CoDy, in our experiments, continues searching after finding the first counterfactual (CF) to identify potentially sparser explanations. GreeDy terminates immediately upon finding a CF or reaching a local optimum. As shown in Appendix F.2, CoDy typically finds the first counterfactual explanation relatively quickly, often within a number of iterations comparable to GreeDy's total calls. If configured to stop after finding the first CF (maintaining fidelity while deprioritizing minimal sparsity), CoDy's runtime becomes much closer to GreeDy's, while still being more robust. Thus, CoDy’s runtime is highly adjustable, allowing users to trade off speed for sparsity and more interpretable explanations as needed. We will clarify this in the paper. **Sparsity Control** Neither CoDy nor GreeDy take sparsity as an input; they identify the minimal necessary event set to flip the prediction. Sparsity is an outcome of the search, not a hyperparameter. T-GNNExplainer also determines size automatically; only PGExplainer is configured with a fixed sparsity of 0.2. The fidelity values $fid_+$ and $fid_-$ are computed aggregately over all explanations found for the test instances, representing the overall success rate for the method across its generated explanations. To calculate the AUFSC, we integrate over the cumulative fidelity achieved at sparsity levels (the fraction of instances for which a method found a counterfactual explanation with $sparsity \leq s$, at sparsity level s). This is a standard way to evaluate methods producing variable-size outputs, directly comparing their ability to find concise and necessary explanations. **Connection to Discrete-Time Methods** While CTDGs can be discretized, this process inherently loses temporal precision, as multiple events are grouped within fixed intervals, erasing exact timing and ordering information. TGNNs designed for CTDGs (e.g., TGN, TGAT) explicitly leverage this fine-grained information. Applying a DTDG explanation method to a discretized CTDG explains a different, approximated system and model, not the original CTDG and TGNN. Methods like Prenkaj et al. (2024) are designed for such models that have different assumptions about graph evolution. CoDy directly explains higher-fidelity CTDG models, ensuring explanations faithful to the underlying temporal structure. **Performance** We agree that char scores around 0.5-0.6 are not perfect, but they represent a significant achievement for this novel and challenging task. As the harmonic mean of $fid_+$​ and $fid_−$, $char$ is sensitive to the lower of the two. Finding minimal necessary counterfactuals (high $fid_+$) on complex temporal graphs is inherently difficult, especially for correctly predicted instances where substantial evidence might support the original prediction (Section 5.3.1). Achieving these scores demonstrates a meaningful ability to identify crucial subgraph structures. We appreciate the reviewer noting GreeDy's strong performance. We introduce GreeDy precisely as a competitive baseline. While GreeDy is effective when its greedy path aligns well with the optimal solution, CoDy's approach offers greater robustness against local optima. Appendix F.2 illustrates this: CoDy's performance continues to improve with more iterations, eventually surpassing GreeDy even in challenging settings. We respectfully disagree that there are no clear patterns. The results consistently show: (i) Counterfactual methods (CoDy/GreeDy) significantly outperform factual baselines on $fid_+$, as expected. (ii) Spatio-temporal and Local-gradient selection policies substantially outperform Random and often Temporal policies (Section 5.3.4). (iii) There's a notable difference in explainability between correct and wrong predictions (Section 5.3.1). To make the results easier to interpret, we will reorganize the results tables and group evaluation together based on if the predictions were correct or wrong, showing a clearer picture of the results. **Relevance & Significance** Regarding the perceived specificity of our topic, we emphasize that CTDGs offer high-fidelity representations for ubiquitous evolving systems like financial networks, social media, and recommendation platforms, where powerful TGNNs are increasingly applied. As these TGNNs operate in high-stakes domains, understanding their 'black-box' decisions through interpretable methods is critical for trust, regulatory compliance, and debugging; counterfactual explanations provide particularly actionable insights in these contexts. Therefore, CoDy addresses the crucial and growing need for trustworthy TGNNs on high-fidelity CTDGs, bridging a significant gap for deploying these models responsibly in real-world applications. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. I better understand the sparsity/fidelity scores now. Regarding the experimental results, I see that there are some base finding, i.e., such that the proposed methods outperform random selection. But these are not the most insightful to me. I am particularly interested in comparing CoDy vs. GreeDy, also to give practitioners a guideline on which to chose and with which policy. Overall, I think the presentation of the results should be improved. While the rebuttal has improved my understanding, I think my main points are still valid and I maintain my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the continued involvement and feedback. We'd like to share how we are revising the paper for a camera-ready version to give a clearer presentation of the results. **1. Enhancing presentation of results** We reorganize the results in Table 1 and group them by explanations for correct predictions (+) and incorrect predictions (-) to enhance clarity. To exemplify this improvement, see the reorganized section on the $char$ metric, showing a clearer pattern: | | uci-m (+) | uci-f (+) | wiki (+) | uci-m (-) | uci-f (-) | wiki (-) | | --- | --- | --- | --- | --- | --- | --- | | GreeDy-rand | 0.04 | 0.08 | 0.09 | 0.14 | 0.12 | 0.19 | | GreeDy-temp | 0.22 | 0.50 | 0.17 | 0.49 | 0.47 | 0.45 | | GreeDy-spa-temp | *0.31* | *0.53* | 0.23 | 0.54 | 0.46 | 0.54 | | GreeDy-gradient | 0.18 | 0.39 | 0.14 | 0.51 | 0.42 | 0.43 | | CoDy-rand | 0.19| 0.43 | 0.24 | 0.52 | 0.47 | 0.58 | | CoDy-temp | 0.23 | 0.49 | 0.22 | 0.55 | *0.54* | 0.62 | | CoDy-spa-temp | **0.31** | **0.54** | **0.30** | *0.57* | 0.52 | *0.65* | | CoDy-gradient | 0.27 | 0.50 | *0.27* | **0.58** | **0.57** | **0.68** | Additionally, we are revising the text accompanying the results to go beyond just reporting numbers by adding interpretation as to why certain methods/policies excel in specific scenarios (e.g., linking CoDy-local-gradient's success on wrong predictions to its ability to exploit model sensitivities when it errs; linking CoDy-spatio-temporal's success on correct predictions to leveraging structural context) **2. Directly comparing GreeDy and CoDy** We explicitly acknowledge that while GreeDy performs well in some settings (validating it as a strong baseline), CoDy's strength lies in its robustness and adaptability. GreeDy's success often depends on the initial greedy path aligning well with the optimal solution, which isn't guaranteed. CoDy's search framework allows it to explore more broadly and backtrack, making it less susceptible to these pitfalls, leading to better or more reliable explanations, particularly as evidenced by its stronger performance with more iterations (Appendix F.2) and across different policies. The revised presentation will make this trade-off (GreeDy's speed vs. CoDy's robustness) much clearer. **3. Add paragraph on practical implications** We add the following paragraph on practical implications: In practice, CoDy presents the most robust and performant choice. While potentially requiring more computation if searching for minimal sparsity, its performance is more reliable across diverse scenarios. By choosing to terminate the search upon finding the first valid counterfactual, a practical implementation can significantly reduce runtime while retaining CoDy's robust search capabilities. On the other hand, if computational efficiency and rapid explanation generation are paramount, GreeDy presents a compelling, faster alternative. It can yield good results quickly, particularly if its initial greedy choices happen to align with an effective counterfactual path. However, this efficiency comes with a higher risk of suboptimal explanations and the potential of ending-up in local optima. The spatio-temporal selection strategy is best fit as selection strategy in real-world applications where ground-truth labels are not available. It performs best on correct predictions, which are most common in properly functioning TGNNs, and only slightly worse than the local-gradient policy in explaining incorrect predictions.
null
null
null
null
null
null
null
null
Synthesizing Privacy-Preserving Text Data via Finetuning *without* Finetuning Billion-Scale LLMs
Accept (poster)
Summary: This work proposes a synthetic data generation method with a differential privacy guarantee. It leverages a lightweight data generator and a topic model for topic clustering. This method tackles the limitation of the existing DP fine-tuning method, which is expensive and relies on large LLM, and the prompt-based methods, which heavily rely on manual prompts. The extensive experiments on both generative and classification tasks show the advantage of the proposed method. Claims And Evidence: Most claims in the work are properly justified. There are some minor points to be clarified. > "Third, PE-based methods need to balance between data quality and synthetic data size (see discussions in §2), while our framework naturally allows for unlimited data samples using the DP finetuned generator, without additional privacy costs during generation." This sentence mentions the limitation of data quality of PE-based methods, but it needs clarification on the quality guarantee of the proposed method. Methods And Evaluation Criteria: **Method** The whole method is reasonable and indeed provides DP guarantees. I have some confusions to be clarified. 1. The privacy budget. While the authors mention in Appendix A, I think a more detailed privacy analysis should be provided. Since CTCL involves two DP processes, the histogram mechanism and DP fine-tuning, and the two processes are sequential, it is crucial to discuss the budget allocation for the two processes. I am also curious if the allocation will influence the utility. 2. The choice of generator and topic model From my perspective, the generator determines the quality of the synthetic data, and the accuracy of the topic model also influences the construction of the histogram. Could the authors provide some intuitions on the choice of the architecture and size of the generator? I guess a larger size of generator can improve the quality but introduce more computation costs for fine-tuning, so the balance is crucial. I am also a little worried if a 140M model can learn the 430M dataset well. Also, how the number of clusters for the topic model is determined, as it is a universal model that needs to accommodate unseen texts? **Evaluation** The evaluation is proper. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs are proper and comprehensive. The analyses are well supported. Similar to my concern in Method, could the author provide more details on the budget allocation of the proposed method in experiments? And what is the influence of the budget allocation? Supplementary Material: Review every part. Relation To Broader Scientific Literature: The proposed method handles limitations of existing DP methods including DP fine-tuning on large models and prompt-based methods. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The whole paper is well written and easy to follow. Other Comments Or Suggestions: N/A Questions For Authors: See above reviews. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your encouraging remarks that our method is reasonable, our experimental design is both proper and comprehensive, and that our analyses are well-supported with most claims being appropriately justified. ### Privacy Budget Allocation Our budget allocation follows [1], our "Post-Generation Resample" baseline, which also has both DP-histogram and DP-finetune steps. For DP-histogram, we adopt the same -- adding a Gaussian noise 10 on every histogram item. The overall privacy budget (ε) is computed as a composition of both steps using the standard `dp_accouting` package. We also show individual ε values below on PubMed, under composed $ε=4,2,1$. DP-histogram only takes a small portion of the overall budget. This aligns with observations in [1] (page 5) and highlights our CTCL-topic design achieves strong performance while consuming only a small portion of the privacy budget. | ε (Composed) | ε (DP-Histogram) | ε (DP-Finetune) | |:-:|:-:|:-:| | 4 | 0.39 | 3.96 | | 2 | 0.39 | 1.94 | | 1 | 0.39 | 0.9 | We also find that the impact of changing the allocation is insignificant. For example, in $ε=4$ setting on PubMed, increasing histogram's noise from 10 to 20 only alters the DP-finetuning budget $ε$ marginally—from 3.96 to 3.99. [1] Privacy-Preserving Instructions for Aligning Large Language Models, ICML 2024 ### Generator design choices We use a seq2seq generator because its encoder is well-suited for understanding the conditions. For size, the 140M model is to balance the efficiency with generation ability, making it practical for resource-constrained scenarios. We agree other model selections might perform better, but we do not extensively explore all designs because our intuitive choice already demonstrates strong downstream performanceas in our experiments. Moreover, same as all pretraining research, due to the cost and scale, it could be super challenging to exhaustively search over all possible configurations. We would also like to emphasize that the core contribution of our work is the overall CTCL framework. Each module within the framework remains flexible and open to further design improvements. **140M model can learn 430M data well?** The large dataset is collected based on the common intuition that larger pretraining datasets typically lead to stronger models. The goal of pretraining is not for the open-ended pretraining task, but for a model with strong controllability supporting downstream tasks. Moreover, large data even on comparatively small models is a common practice. For instance, LLaMA-3 with 3B or 8B parameters are trained on 15T tokens, where the data samples also significantly exceeds the number of model parameters. ### Topic model design choices We choose Wikipedia for our topic model because it is large-scale, high-quality, and semantically diverse. In practice, it is rare to encounter texts that are entirely outside of its scope. The table below with sampled topics of our five diverse datasets illustrates that a broad range of universal topics are captured: | Dataset | Sample topic 1 | Sample Topic 2 | |--|--|--| | PubMed | microscopy/… | rna/gene/… | | Arena | gameplay/rpg/… | volcano/… | | Chat | comedian/… | oscar/… | | Yelp | restaurant/… | theater/… | | OpenReview | grammar/… | computational/… | The "out-of-domain" text is also considered and processed in our experiments--in our topic model, there is an “unclassified” bin for samples that are not close to any topic. The table below shows the details | Dataset | Training size | unclassified samples | Example | |-|:-:|:-:|--| | PubMed | 75,316 | 0 | | | Arena | 180,000 | 86 | i have no idea who am i, maybe you can help? | | Chat | 17,940 | 0 | | | Yelp | 1,939,290 | 4 | I am unable to provide check number. | | OpenReview | 8,396 | 0 | | We can see the number of unclassified samples is neglectable across all datasets, and those unclassified samples typically contain little to no substantive content, confirming the broad coverage of our topic model. ### Determine the number of clusters First, the number should not be too small, so as to keep the topic conditions meaningful. Second, it should not be too large, as that would amplify per-item DP-histogram noise in downstream tasks. To strike a balance, we set a threshold that each cluster must contain at least 100 out of 6M Wikipedia articles, resulting in 1,300 clusters. ### Quality guarantee of the proposed method Our framework introduces a universal topic model and controllability pretraining, which enables the model to capture valuable high-level, topic-wise information in the private domain—beyond the word-level representations learned by the vanilla DP-finetuning baseline. As a result, the data quality in our method should be guaranteed no worse than that of vanilla DP-finetuning. This quality improvement is empirically supported by the performance gains demonstrated in our experiments and further validated by our ablation study in Section 4.3.2. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I would like to keep my positive score. Good luck.
Summary: This paper presents CTCL, a framework that synthesizes privacy-preserving data by combining a lightweight 140M parameter generator with a clustering-based topic model. The generator is differentially privately fine-tuned on private data, while the topic model produces a DP topic histogram to capture high-level distributional information. This approach circumvents the computational burden of fine-tuning billion-scale LLMs and the need for extensive prompt engineering, proving effective across diverse domains including sensitive applications and generative tasks. Claims And Evidence: Yes. The submission’s claims are largely supported by a comprehensive set of experiments and analyses. It demonstrates that CTCL outperforms several baselines on both generative and classification tasks under differential privacy constraints, and the scalability studies show clear improvements with larger synthetic datasets and higher privacy budgets. Methods And Evaluation Criteria: Yes. The use of diverse benchmark datasets—ranging from academic medical texts to conversational dialogues and business reviews—ensures that the evaluation reflects real-world applications where privacy is critical. Moreover, evaluation metrics such as next-word prediction accuracy, perplexity, and MAUVE scores, along with thorough ablation studies and scalability experiments, provide robust evidence for the framework's effectiveness. Theoretical Claims: Yes. The algorithm is a chain of well-known DP algorithms. By the composition law of DP, everything is correct. Experimental Designs Or Analyses: NA Supplementary Material: All parts. Relation To Broader Scientific Literature: CTCL innovatively integrates a lightweight, 140M-parameter conditional generator with a clustering-based topic model to capture both high-level distributional information and fine-grained textual details. This combination not only circumvents the resource constraints associated with billion-scale models but also offers enhanced controllability and scalability, marking a significant advancement in privacy-preserving data synthesis. Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths:** - The novel integration of a lightweight DP-finetuned generator with a clustering-based topic model enables efficient synthesis of high-quality synthetic data by capturing both fine-grained details and high-level distributional patterns. - The approach leverages well-established DP techniques to provide robust privacy guarantees without requiring extensive prompt engineering or resource-intensive fine-tuning of large-scale models. **Weakness:** - My main concern is that paper lacks evaluation of the proposed approach against practical privacy attacks that might occur in real-world scenarios. So while the paper shows good utility improvements, its privacy aspects remain insufficiently evaluated. Other Comments Or Suggestions: NA Questions For Authors: See weaknesses part. If the authors can show at least one type of attack (either a reconstruction attack or an attribute inversion attack), it would strengthen the paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your supportive and detailed comments! We are encouraged by your feedback that our approach is novel, our experiments and analyses are comprehensive, and that our method makes significant advancements in privacy-preserving data synthesis. ### Privacy Attacks According to the post-processing property of differential privacy (DP), our synthetic data has the same formal DP guarantees—$\epsilon=1,2,4$—as our generator, and these privacy budgets are widely accepted in the machine learning community, with $\epsilon < 10$ considered reasonable and $\epsilon \approx 1$ regarded as offering strong privacy protection ([Ponomareva et al. 2018](https://arxiv.org/abs/2303.00654)). DP already offers provable protection against membership inference and other, typically weaker, attacks such as reconstruction or attribute inversion attacks. This is supported by extensive empirical evidence, e.g., [Steinke et al. 2023](https://arxiv.org/abs/2305.08846), [Andrew et al. 2023](https://arxiv.org/abs/2302.03098), and [Nasr et al. 2023](https://arxiv.org/abs/2302.07956). Furthermore, recent works such as [Yue et al. 2023](https://arxiv.org/abs/2210.14348) and [Yu et al. 2024](https://arxiv.org/abs/2402.13659)’s experiments on secret sharer-style attacks ([Carlini et al. 2018](https://arxiv.org/abs/1802.08232)) in the DP synthetic data setting further support the privacy protection offered by DP. As practical attacks are usually weaker than the worst-case attacks that DP is designed to protect against, based on empirical privacy auditing and attack studies in recent years, we skipped auditing in this paper. That being said, we would be happy to include relavant experiments if you have a specific attack in mind that is beyond our consideration and feasible in our setting!
Summary: This paper introduces CTCL (Data Synthesis with Controllability and Clustering), a framework to generate privacy-preserving synthetic text data without fine-tuning LLMs or extensive domain-specific prompt engineering. The CTCL framework consists of two primary components: a lightweight 140M-parameter conditional text generator and a universal clustering-based topic model, both pretrained on publicly available datasets. To adapt to private domains, the generator undergoes differential privacy (DP) finetuning to capture fine-grained textual details, while the topic model constructs a DP-protected histogram to represent high-level distributional information. Synthetic data generation then leverages this DP histogram to control topic distribution during sampling. Empirical evaluations demonstrate that CTCL outperforms prior methods, such as Aug-PE, in benchmarks including PubMed and Chatbot Arena, particularly under tight privacy budget. Claims And Evidence: The claims made in the submission are generally well-supported by clear experiments and detailed analyses. The authors provide extensive experimental comparisons against several established baseline methods across multiple downstream tasks—both generative and classification—and demonstrate improvements, particularly under strict DP budget constraints. Methods And Evaluation Criteria: The proposed methods make sense conceptually. The evaluation criteria follow prior works in DP finetuning and DP data synthesis. Theoretical Claims: N/A Experimental Designs Or Analyses: I reviewed the experiment design, especially the validation dataset. I like the inclusion of open-ended tasks in Chatbot Arena, which are usually not present in other LLM DP fine-tuning papers. Supplementary Material: I skimmed through the supplementary material. Relation To Broader Scientific Literature: The contributions of this paper build specifically upon prior work in privacy-preserving data generation and differential privacy (DP) in LLMs. By integrating a lightweight (140M-parameter) conditional text generator with a universal topic model pretrained on public corpora, CTCL efficiently synthesizes high-quality text data under strict DP constraints without extensive prompt engineering or significant computational resources. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strength** The approach to synthesizing privacy-preserving data using a lightweight language model and clustering-based topic model is **Weakness** + More design choices of the conditional generator are not explored. + The evaluation primarily focuses on quantitative benchmarks, with limited qualitative analysis of generated synthetic data. Additional qualitative analysis could help the reader further understand the practical usefulness and readability of the synthetic data. Other Comments Or Suggestions: Please see the questions below. Questions For Authors: 1. Since the proposed framework requires training a clustering-based topic model and a generator LLM on Wikipedia, how sensitive is it to domain mismatch between Wikipedia topics and the private datasets? Are there failure cases where this approach may struggle from domain mismatch? 2. Could the authors clarify how the DP budget was allocated between the DP Topic Histogram and DP Finetuning? 3. For the sample generation (4.1.3), the authors used nucleus sampling with top-p=0.95. How sensitive is the quality of generated data to the choice of sampling parameters? 4. What’s the rationale for using a 140M BART-like model? I think the paper can be stronger if more design choices for the generator are explored. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your encouraging remarks that our claims are well-supported by clear experiments and detailed analyses, and our proposed method is reasonable. ### Generator design choices We use a seq2seq generator because its encoder is well-suited for understanding the conditions. For size, the 140M model is to balance the efficiency with generation ability, making it practical for resource-constrained scenarios. We agree other design choices might perform better, but we do not extensively explore all designs because our intuitive choice already demonstrates strong downstream performanceas in our experiments. Moreover, same as all pretraining research, due to the cost and scale, it could be super challenging to exhaustively search over all possible configurations. We would also like to emphasize that the core contribution of our work is the overall CTCL framework. Each module within the framework remains flexible and open to further design improvements. ### Qualitative Analysis Thank you for the suggestion! We will include more qualitative examples in the next version of the paper in addition to those in Table 6. As noted in Section 4.3.3, we emphasize qualitative results because our primary usage of synthetic data is to support downstream model development. Therefore, the surface form of the synthetic text is less critical. For instance, Table 6 shows that PE-based generation using ChatGPT consistently produces fluent outputs. However, its downstream model performance is only comparable to (and sometimes worse than) that trained on less fluent synthetic data generated by DP-finetuned BART-base. ### Topic model design choices We choose Wikipedia for our topic model because it is large-scale, high-quality, and semantically diverse. In practice, it is rare to encounter texts that are entirely outside of its scope. The table below with sampled topics of our five diverse datasets illustrates that a broad range of universal topics are captured: | Dataset | Sample topic 1 | Sample Topic 2 | |--------------------|-----------------------|------------------------| | PubMed | microscopy/electron/… | rna/mir/gene/… | | Chatbot Arena | gameplay/rpg/wii/… | eruptions/volcano/… | | Multi-Session Chat | comedian/presenter/… | oscar/nominations/… | | Yelp | restaurant/diner/… | theater/cinemas/… | | OpenReview | grammar/syntax/… | computational/turing/… | The "out-of-domain" text is also considered and processed in our experiments--in our topic model, there is an “unclassified” bin for samples that are not close to any topic. The table below shows the details | Dataset | Training size | unclassified samples | Example | |-|:-:|:-:|--| | PubMed | 75,316 | 0 | | | Chatbot Arena | 180,000 | 86 | i have no idea who am i, maybe you can help? | | Multi-Session Chat | 17,940 | 0 | | | Yelp | 1,939,290 | 4 | I am unable to provide check number. | | OpenReview | 8,396 | 0 | | We can see the number of unclassified samples is neglectable across all datasets, and those unclassified samples typically contain little to no substantive content, confirming the broad coverage of our topic model. ### Privacy Budget Allocation Our budget allocation follows [1], our "Post-Generation Resample" baseline, which also has both DP-histogram and DP-finetune steps. For DP-histogram, we adopt the same -- adding a Gaussian noise 10 on every histogram item. The overall privacy budget (ε) is computed as a composition of both steps using the standard `dp_accouting` package. We also show individual ε values below on PubMed, under composed ε=4, 2, or 1. DP-histogram only takes a small portion of the overall budget. This aligns with observations in [1] (page 5) and highlights our CTCL-topic design achieves strong performance while consuming only a small portion of the privacy budget. | ε (Composed) | ε (DP-Histogram) | ε (DP-Finetune) | |:-:|:-:|:-:| | 4 | 0.39 | 3.96 | | 2 | 0.39 | 1.94 | | 1 | 0.39 | 0.9 | We also find that the impact of changing the allocation is insignificant. For example, in ε=4 setting on PubMed, increasing histogram's noise from 10 to 20 only alters the DP-finetuning budget $ε$ marginally—from 3.96 to 3.99. ### Nucleus Sampling Our choice of nucleus sampling with top-p = 0.95 follows the standard practice introduced in [2], which has been shown to provide a strong balance between diversity and coherence, outperforming alternatives such as top-k or beam search. Also, since this standard hyperparameter setting already yields strong performance, we chose not to exhaustively tune this particular top-p value. [1] Privacy-Preserving Instructions for Aligning Large Language Models, ICML 2024 [2] The Curious Case of Neural Text Degeneration, ICLR 2020. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. I will keep my score as this is already very positive, but I believe that the follow-up experiments further strengthen this paper.
Summary: The paper proposes a novel framework for generating privacy-preserving synthetic text data called CTCL. The authors claim that previous works mainly utilized fine-tuning the LLMs with differential privacy, which is computationally expensive or relies on extensive prompt engineering, which is time-consuming and does not necessarily result in a good performance. Instead, CTCL includes several partitions to guarantee better performance and privacy at the same time. First, they use public data (here Wikipedia) to train a generator. Then using DP fine-tuning they adapt the generator to private data. The generator then synthesizes data guided by this DP histogram information of data. Evaluations across various domains demonstrate CTCL's effectiveness, especially under stringent privacy conditions, and its superior scalability compared to existing techniques. Claims And Evidence: YES Methods And Evaluation Criteria: YES Theoretical Claims: Not Applicable Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: * Improving the state-of-the-art DP-synthetic data generation. The synthetic data can later be used in the fine-tuning of models to preserve privacy and avoid the problems of directly using DP-finetuning. * Instead of directly using prompt engineering to synthesize the data, they utilize the LLM to cluster the data. In this way, the performance is not directly dependent on the prompts or models. Essential References Not Discussed: NO Other Strengths And Weaknesses: STRENGTHS: * The paper is very well-written. Motivation, prior works, and method intuitions are adequately discussed. * The paper correctly identifies the shortcomings of the SOTA method (PE) and shows it in the experiments. (The most interesting result is that the PE's performance is primarily independent of the privacy budget). * The final generator has only 140M parameters, which is efficient in fine-tuning and inference. * The effort toward **reducing** the reliance on prompt engineering is appreciated. WEAKNESS: * There is an assumption that the public data can capture the primary information of the private text. * The computational cost of the methods is missing. Some components seem very complex and expensive (although they are done once, it is good to understand the complexity and performance trade-off). * The framework is complex; the authors have explained it in Figure 1, but a pseudo-code description can help the readers understand each step better. Other Comments Or Suggestions: * There should be a cost comparison between the CTCL and the prior works. PE only used the model API to generate the synthetic data, but the CTCL pipeline requires training and fine-tuning multiple components. * The authors have shown that changing the model API can change the performance of the PE method. The same analysis is applicable to the use of Gemma 2 in CTCL. Questions For Authors: * The limitation section is missing. * How is the method performed for out-of-domain private data? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of the adequate discussion in our paper writing, the accurate identification of limitations in prior approaches, the efficiency of our lightweight final model, and our efforts to reduce reliance on prompt engineering. ### Public Data capturing private domain info. We agree a key intuition behind our design is the information learned from large-scale, diverse public data can benefit downstream learning in private domains. This aligns with the principle behind the pretraining–finetuning paradigm adopted across most LLM research. ### Topic model design choices We choose Wikipedia for our topic model because it is large-scale, high-quality, and semantically diverse. In practice, it is rare to encounter texts that are entirely outside of its scope. The table below with sampled topics of our five diverse datasets illustrates that a broad range of universal topics are captured: | Dataset | Sample topic 1 | Sample Topic 2 | |--|--|--| | PubMed | microscopy/… | rna/gene/… | | Arena | gameplay/rpg/… | volcano/… | | Chat | comedian/… | oscar/… | | Yelp | restaurant/… | theater/… | | OpenReview | grammar/… | computational/… | The "out-of-domain" text is also considered and processed in our experiments--in our topic model, there is an “unclassified” bin for samples that are not close to any topic. The table below shows the details | Dataset | Training size | unclassified samples | Example | |-|:-:|:-:|--| | PubMed | 75,316 | 0 | | | Arena | 180,000 | 86 | i have no idea who am i, maybe you can help? | | Chat | 17,940 | 0 | | | Yelp | 1,939,290 | 4 | I am unable to provide check number. | | OpenReview | 8,396 | 0 | | We can see the number of unclassified samples is neglectable across all datasets, and those unclassified samples typically contain little to no substantive content, confirming the broad coverage of our topic model. ### Privacy Budget Allocation Our budget allocation follows [1], our "Post-Generation Resample" baseline, which also has both DP-histogram and DP-finetune steps. For DP-histogram, we do the same -- adding a Gaussian noise 10 on every histogram item. The overall privacy budget (ε) is computed as a composition of both steps using the standard `dp_accouting` package. We also show individual ε values below on PubMed, under composed $ε=4, 2, 1$. DP-histogram only takes a small portion of the overall budget. This aligns with observations in [1] (page 5) and highlights our CTCL-topic design achieves strong performance while consuming only a small portion of the privacy budget. | ε (Composed) | ε (DP-Histogram) | ε (DP-Finetune) | |:-:|:-:|:-:| | 4 | 0.39 | 3.96 | | 2 | 0.39 | 1.94 | | 1 | 0.39 | 0.9 | We also find that the impact of changing the allocation is insignificant. For example, in $ε=4$ setting on PubMed, increasing histogram's noise from 10 to 20 only alters the DP-finetuning budget $ε$ marginally—from 3.96 to 3.99. [1] Privacy-Preserving Instructions for Aligning Large Language Models, ICML 2024 ### Computational Cost Our pretraining has 3 steps: 1. 2B LLM inference on a 430M documents. 2. Training a 140M LM on 430M (condition, document) pairs. 3. Generating embeddings for 6M Wikipedia docs with a 20M embedding model, followed by a HDBSCAN clustering. Among these, Step 1 dominates the computation due to the 2B LLM. This remains lighter than or comparable to the computation of pretraining a 2B LLM. Notebly, we will release our pretrained models, so users don't take the pretraining cost. This corresponds to the pretraining of the LLMs behind the APIs in PE-based approaches. For the downstream stage, PE methods often have significant user-side costs. For instance, AUG-PE synthesizes 2,000 samples in T=10 evolution iterations, each with L=4 variations, totaling 80K (2K x 10 x 4) ChatGPT requests. This represents a substantial API cost, whereas our approach applies finetuning on the lightweight model on local devices, which has no significant cost overhead. ### Use of Gemma As shown in our paper Table 1 and mentioned in Sec 3.1, unlike PE’s reliance on strongest LLMs like ChatGPT, our framework only requires basic instruction-following ability of the LLM, since aspects are extracted from existing documents without relying on LLM’s creativity. Notably, the LLM we use, Gemma-2-2B, is already one of the smallest LLMs. Larger models, such as its 9B and 27B variations, would likely yield even stronger results. We don’t experiment with more LLM choices because the small 2B LLM already has strong performance. We would also like to emphasize that the core contribution of our work is the overall CTCL framework. Each module design within the framework remains flexible and open to further design improvements. ### Pseudo-code and Limitations section Thank you for the suggestions! While ICML does not require a Limitations section, we will include both our pseudo-code and a limitation section into our next paper version for improved clarity.
Summary: This paper introduces a new method CTCL to enable efficient synthetic data generation through privately fine-tuning a controlled generation LM. CTCL has two stages, first learning the general topics of private data through DP histogram learning and then privately fine-tunes a language model given the topics as the condition. CTCL only requires a LM of size 140M which is considerably smaller than existing on-device LLMs in the scale of billions. Claims And Evidence: The claims are supported by the evidence under the assumptions that the author made. Methods And Evaluation Criteria: The evaluation criteria makes sense to demonstrate the quality of synthetic data. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments are thorough to demonstrate the effectiveness of the proposed approach. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper is related to private synthetic data generation which is an important topic in the age of LLMs. The method targets resource-limited scenarios which match closely with real-world deployment. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and motivated, privately generating high quality synthetic data with a small model is desirable especially considering fine-tuning on the private data on edge devices which have limited resources. 2. The experiments are thorough with multiple baseline approaches, tasks, and datasets considered. Weaknesses: 1. It is unjustified why pre-training from scratch is needed for the LM part. Can one start from a pre-trained smaller language model with a similar number of parameters? Also, would knowledge distillation from an LLM into a smaller LM be enough? 2. There is an under-explained discrepancy between how condition is generated in the pretraining and fine-tuning stage. In the pre-training stage, the condition is extracted for each document from Gemma 2B, while for private data, the condition is from the topic model. Why not also use the topic model to generate the condition for the pretraining data as well? Or why not use Gemma 2B to extract aspects for the private data and learn those aspects privately? Other Comments Or Suggestions: N/A Questions For Authors: 1. The generated conditions seem to be helpful for other methods as well, e.g. for private evolution, the conditions can be used for engineering the prompts for variations. Have the authors considered this as an alternative? 2. The topic model is trained on Wikipedia, how would it work when the topics from private data are personal and out of the domain of Wikipedia? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and encouraging feedback that the problem we investigate is important, that our method is well-motivated and matching real-world applications, that our experiments are thorough, and that our paper is well-written! ### Pretraining We would like to clarify that our pretraining *does not* start from random initialization. As noted in Sec. 3.1, we perform *continual pretraining* on top of BART-base [1], a model already pretrained on a general corpus and it possesses basic language understanding abilities. Our continual pretraining is to endow the model with controllability. We agree that knowledge distillation could be an alternative to our pretraining, while it still falls within our overall framework, as it also needs properly constructed data in order to achieve controllability. Moreover, it should require more meticulous design choices and more computation due to the large teacher model. We would also like to emphasize that the core contribution of our work is the overall CTCL framework. Each module design within the framework remains flexible and open to further optimization and design improvements. [1] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, ACL 2020 ### Discrepancy in constructed conditions between pretraining and finetuning **Why not topic model for pretraining?** The distinction between pretraining and finetuning conditions is intentional for two purposes: enhancing model generalizability and adhering to privacy constraints. In pretraining, using conditions from 2B LLM is for the diversity and flexibility of the aspects that the model can handle, enabling stronger controllability. Otherwise, “keywords” would become the only aspect in the pretraining. **why not Gemma 2B for private data?** If Gemma 2B were applied directly to private data, the extracted conditions would be in free-text format. This raises our concerns about sending the private information to the synthetic data generation process. In contrast, our framework ensures differential privacy by using a noised, topic-wise histogram instead. This preserves meaningful high-level topic information of the private domain while satisfying privacy requirements. To demonstrate the value of the flexible conditioning in pretraining, consider one may have a bit prior knowledge about the downstream domain, e.g., knowing the data is dialogues. In our framework, such prior knowledge can be incorporated, providing a better initialization for DP-finetuning. Below are the model generations controlled by doc type using the same keywords (*Surfing, World Championship, Young Athletes*), showcasing the better initialization: * (Doc Type: News): *A member of WSA and WSL, won the World Championships for the first time in his career. [...]* * (Doc Type: Chat): *How do you feel about the current state of surfing in the U.S. right now? \n There are a lot of great young athletes [...]* * (Doc Type: Blog Post): *I've been fortunate enough to have the opportunity to meet the most talented young athletes [...]* This flexible design also intends to encourage broader and more creative uses of our model. ### Adapting condition generation to PE methods We agree our condition generation could be potentially adapted by PE, for example, within the variation prompts. However, as noted in the paper, a key limitation of PE methods is not learning fine-grained info from private data. Applying our condition generation into the PE framework might not address this key limitation of theirs. ### Topic Model on Wikipedia and Out-of-Domain Generalization We choose Wikipedia for our topic model because it is large-scale, high-quality, and semantically diverse. In practice, it is rare to encounter texts that are entirely outside of its scope. The table below with sampled topics of our five diverse datasets illustrates that a broad range of universal topics are captured: | Dataset | Sample topic 1 | Sample Topic 2 | |--|--|--| | PubMed | microscopy/… | rna/gene/… | | Arena | gameplay/rpg/… | volcano/… | | Chat | comedian/… | oscar/… | | Yelp | restaurant/… | theater/… | | OpenReview | grammar/… | computational/… | The "out-of-domain" text is also considered and processed in our experiments--in our topic model, there is an “unclassified” bin for samples that are not close to any topic. The table below shows the details | Dataset | Training size | unclassified samples | Example | |-|:-:|:-:|--| | PubMed | 75,316 | 0 | | | Arena | 180,000 | 86 | i have no idea who am i, maybe you can help? | | Chat | 17,940 | 0 | | | Yelp | 1,939,290 | 4 | I am unable to provide check number. | | OpenReview | 8,396 | 0 | | We can see the number of unclassified samples is neglectable across all datasets, and those unclassified samples typically contain little to no substantive content, confirming the broad coverage of our topic model.
null
null
null
null
Idiosyncrasies in Large Language Models
Accept (poster)
Summary: This paper explores how distinguishable the outputs of different LLMs are from each other. The main results demonstrate that the LLM to produce a given output can easily be predicted by a trained classifier. Further experiments show commonly-used phrases and other idiosyncrasies that differentiate LLMs from one another. Ultimately, the authors conclude that LLM outputs convey deeply-seated differences in their use of language, despite frequently being trained on common data sources. Claims And Evidence: Claims appear to be supported. Methods And Evaluation Criteria: Most of the results only use one dataset per group of LLMs, as stated in column 2 lines 141-148. These results could change significantly when prompted using a different dataset. Theoretical Claims: N/A Experimental Designs Or Analyses: - Many results are surprising, such as the high degree of distinguishability between LLMs (Section 3.1) and that the distinguishability is retained even when the outputs are paraphrased or translated by another LLM (Section 4.3). - Some experiments seem a little superfluous or could be designed to be more interesting. For example, the “Generalization to out-of-distribution responses” in Section 3.1 or the length control aspect of “Prompt-level interventions” experiment in Section 3.2 do not seem to lead to very useful takeaways. Comparatively, it might be interesting to see if the LLMs become less distinguishable when limiting all the LLMs to a restricted, common set of vocabulary during sampling (especially one without their most-commonly used phrases shown in Figure 5). Similarly, the “Sampling methods” experiment in Section 3.2 may instead be more interesting by investigating if the sampling method impacts how recognizable different LLMs are from each other. In other words, repeat the Table 1 main results experiment using top-k sampling, and again for top-p sampling, etc. Supplementary Material: I do not see any supplemental material Relation To Broader Scientific Literature: - The perspective of the paper is different from many other LLM papers, leading to some findings that may be a useful contribution to the LLM research community. - The “Implications” section does not expand very much on broader implications of the paper, but rather introduces additional experimental results. Moving these additional results to the appendix would allow the authors to better clarify the importance of their work to readers in this section. Essential References Not Discussed: None I am aware of Other Strengths And Weaknesses: - Some minor grammatical errors and other issues: for example, the list of LLM types in 3.1 refers to base LLMs mentioned above, when they are the last item in the list - Figure 4 (and the others of this style) is very useful for results visualization, but looks a little blurry and difficult to read in print. Other Comments Or Suggestions: - Some minor grammatical errors and other issues: for example, the list of LLM types in 3.1 refers to base LLMs mentioned above, when they are the last item in the list - Figure 4 (and the others of this style) is very useful for results visualization, but looks a little blurry and difficult to read in print. Questions For Authors: Similarly to the point I mentioned earlier, would you be able to share results of how distinguishable the LLMs are from each other (as in Table 1) when using each sampling method? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We are happy to address your concerns. - **Different prompt datasets** In our experiments, we primarily use the UltraChat dialogue dataset to generate responses from instruction LLMs and Chat APIs, and the high-quality pretraining dataset FineWeb for base LLMs. Note that these datasets are already diverse by construction. In addition, in Table 3 of our submission, we have presented results with other prompt datasets (e.g., Cosmopedia, LmsysChat, and WildChat). We find the classifier trained on each of them can achieve >90% accuracy (see diagonal entry). This indicates our results are robust across prompt datasets. - **Takeaway of “generalization to OOD responses” and “length control”** The goal of these two experiments is to study idiosyncratic behavior in controlled experiments. The results of “generalization to OOD responses” show that our classifiers capture robust distinguishable features, indicating that our observation is general and not dataset specific. The “length control” experiment is to make sure that the high classification accuracy is not due to the short-cut solution which is simply based on the response length. - **Sampling with restricted, common set of vocabulary (especially without characteristic phrases)** We restrict the LLMs to a common set of K words during response generation by only sampling relevant tokens composing each word and special character along with special tokens for each LLM. In addition, we do not allow the LLM to output characteristic phrases in Figures 5 and 13 by setting relevant logits to -inf. Below are the classification results of instruct LLMs using the above sampling strategy: | | Llama vs Gemma | Llama vs Qwen | Llama vs Mistral | Gemma vs Qwen | Gemma vs Mistral | Qwen vs Mistral | four way | |-----|-----|-----|-----|----- |-----|-----|----- |all words | 99.9 | 97.8| 97.0 | 99.9| 99.9|96.1 |96.3| |K=2000|98.4| 97.1|99.5 | 99.4 | 99.5 | 98.6|96.9 | |K=4000| 99.1 | 97.5|98.5 | 99.4 | 99.6 | 98.0 | 97.1| The results show that even restricting to a common set of vocabulary, our classifiers can still predict LLM identity with high accuracy. - **Main results with top-k and top-p sampling** We use the softmax sampling (T=0.8), top-k sampling (k=100), top-p sampling (p=0.9) to generate responses for instruct LLMs and report their results below: | | Llama vs Gemma | Llama vs Qwen | Llama vs Mistral | Gemma vs Qwen | Gemma vs Mistral | Qwen vs Mistral | four way | |-----|-----|-----|-----|----- |-----|-----|----- |greedy | 99.9 | 97.8| 97.0 | 99.9| 99.9|96.1 |96.3| |softmax (T=0.8) | 99.2| 97.3 | 96.1| 99.5| 99.5 |95.7 |95.2| |top-k (k=100) | 99.3| 96.8 |96.2 |99.5 |99.5 | 96.1|95.1 | |top-p (p=0.9)| 99.2 | 97.2 | 96.4 |99.6 |99.6 |95.9|95.5| We observe that using different sampling methods does not alter the results very much (often <2%). - **Implication too specific and does not expand to broader impact** We acknowledge that the current implication section contains too many experiments. In the next revision, we will focus more on discussing and highlighting the broader impact of our results in this section. Besides our analysis on synthetic data and model similarity, we also want to mention another implication on privacy and safety. One interesting example would be that the distinguishability of AI models could allow a malicious party to manipulate voting based evaluation leaderboard, e.g., Chatbot Arena. As the identity of the model could be determined from the texts, such an adversary would be able to consistently vote for a desired target model. - **Grammar errors and LLM type order** Thank you for pointing this out. We have corrected it in the current version of our paper. We will run a systematic grammar check to proofread the paper. - **Blurry figure** Thank you for raising this concern. The font size is indeed a little bit small there. We will increase the font size to match that of the figure captions to ensure the results are more readable.
Summary: This paper studies the question of how idiosyncratic the responses of different LLMs are. The authors frame this as a classification task where models are trained to predict which LLM (among a fixed set) generated a particular output. The experimental results show that these classifiers have high accuracy, >95% (compared to a base rate of 25%). The paper then studies specific characteristics that contribute to these idiosyncracies. For example, they find differences in word-level distributions across models, even at the unigram level. They also show that LLM responses can be predicted when responses are rewritten, translated, or summarized. The paper concludes by discussing implications of the results for synthetic data training and using their metrics as a measure of model similarity. Claims And Evidence: The paper's central claim is that which LLM has written a given text can be easily predicted. In one sense, this claim is very well supported: - Extensive analysis shows compelling results that LLM outputs are very predictable (when conditioning on up to 4 LLMs). - Creative range of experiments from different prompts and text shuffling - The qualitative analysis was nice, focusing on different frequencies of words and letters. The characteristic phrases analysis was also interesting. However, in another sense, the setting studied by the authors is very narrow with unclear implications: - The setting -- conditioning on knowing the text is from an LLM and that it's one of 2 or 4 known models -- is contrived. When are we going to come across this in the real-world? As the authors note, this is different from testing whether a response is LLM-generated or not. Why not a larger classification setting that considers many models? - The first point in the implications section -- that fine-tuning two different models on the same synthetic data makes their answers harder to distinguish -- is unclear. Why does the data have to be synthetic? This seems like this could be a point about models being trained on different data being easier to differentiate. Wouldn't we have a similar result when models are trained on non-synthetic data? In other words -- why is the synthetic nature of the data important to this claim? - The second point in the implication section is that the predictability metric can be used as a metric of model similarity. While this is interesting, there isn't much discussion or results about why this would be a useful metric. Is this capturing something that other model similarity metrics aren't capturing? And is there evidence for this? Methods And Evaluation Criteria: The experimental methodology is generally sound, although some details aren't clearly described. For example, the paper doesn't clearly explain how held-out sets are constructed (e.g. whether responses to the same prompt can appear in both train and test set). See the "Other strengths and weaknesses" section for more discussion. See also the points in "Claims and Evidence" Theoretical Claims: The paper doesn't make substantial theoretical claims requiring proofs. Experimental Designs Or Analyses: See responses to "Methods and Evaluation Criteria" and "Other Strengths and Weaknesses" Supplementary Material: I scanned the tables and looked to see if more experimental setup details were included, which I did not see. Relation To Broader Scientific Literature: The paper positions itself well within the literature on dataset bias and human vs. machine-generated text detection. However, it could connect to the LLM monoculture literature -- see "Other Strengths and Weaknesses". Essential References Not Discussed: No Other Strengths And Weaknesses: As mentioned above, the biggest weakness of the paper is the unclear implications. A couple of suggestions for improvement below: - One possible implication worth exploring is the LLM monoculture literature. How do these results inform our understanding of LLM monoculture, showing e.g. that LLMs make similar mistakes in answering questions or behave similarly when recommending jobs? - More analysis about what makes pairs of models more predictable would be interesting. E.g. how important is the training data overlap (for models where training data is known)? The other main drawback of the paper was unclear writing: - There's not enough detail in the second paragraph of Section 3 to make the experimental setup clear. E.g. how are the held-out sets constructed? Can responses to the same prompt appear in both the train and held-out sets? Does the classifier condition on prompt? What's the total number of prompts in each dataset? These details aren't in the appendix either. - The tables are very confusing. E.g. what is Table 4 reporting, and what are the units? For many of the tables (e.g. Figure 2, Table 8) it's not clear what's the base rate -- are these all 4-way classification? - The description of LLMs in lines 125-129 is too strict (i.e. that they "all utilize the Transformer architecture with self-attention" and "trained using an auto-regressive objective"). For example we're beginning to have diffusion-based LLMs ([1] for code specifically, [2] more recently as a general purpose LLM) [1] Singh, Mukul et al. "CodeFusion: A Pre-trained Diffusion Model for Code Generation". [2] https://www.inceptionlabs.ai/news Other Comments Or Suggestions: See above Questions For Authors: See above ===================== POST-REBUTTAL UPDATE ===================== I thank the authors for addressing my comments. In response I raised my review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We are happy to address your concerns. - **Contrived setting** We choose this setting for several reasons: 1. We are motivated by prior works [1,2] showing the bias in computer vision datasets and thus adopt a standard classification setup to study the differences of LLMs. 2. It helps us evaluate and understand model idiosyncrasies systematically. In section 3 and 4 of our submission, we have designed many analysis experiments around our classification framework, from which we draw many valuable insights. 3. Our choice of 4 LLMs per classification group (4 base models together and similarly for instruct models and chat APIs) is mostly for fair comparisons. We acknowledge that our setup may not consider scenarios where the list of source LLMs could be large and even unknown beforehand. To address this, we conduct a classification experiment with more LLMs (10 in total): ChatGPT, Claude, Grok, Gemini, DeepSeek, Llama, Gemma, Mistral, Qwen and Phi-4, achieving 92.2% accuracy. This indicates our results can be useful in a practical setup with more models. We will include these discussions and new result in our next revision. - **Synthetic data** This result is indeed not specific to synthetic data. We verified fine-tuning LLMs on two non-synthetic datasets yield distinguishable models, e.g., 99.1% for GSM8k versus Math; 98.9% for MMLU versus ARC. We focus on synthetic data in the paper since synthetic data becomes a popular use case of LLM generated outputs. For example, UltraChat dataset consists of mostly GPT-4 responses. Thus we want to highlight this idiosyncratic behavior of training on synthetic data where idiosyncrasies of teacher models can be inherited. We will clarify this point in our next revision. - **Model similarity** Two recent papers have proposed other model similarity metrics, [3] and [4], which we have cited in our submission. Specifically, [3] focused on “vibe”-based similarities based on tones and writing styles. [4] proposed a metric to measure statistical differences between models and their quantized, watermarked or fine-tuned versions. [3]’s method can be extensive and biased as it uses a LLM judge to obtain the vibe scores. [4] is more suitable to detect small changes occurred to one model. Our metric is more general than [3] (not limited to vibes), as we show in Section 4. As opposed to [4], we focus mainly on comparing two different models but not variations of one model. - **LLM monoculture** A recent study that explores LLM monoculture [5] finds that LLM-generated book reviews are more positive than human-written ones. Following their setup, we generate book reviews using Llama-2-7b-chat and Vicuna-13b-v1.5 and find these responses can be classified with near-perfect accuracy. Note that our results are not contradictory with [5] as it is possible to express similar sentiments with different stylistic / lexical choices. Therefore, our work offers a tool to capture idiosyncrasies that characterize LLM monoculture. - **Training data overlap** We added experiments to study the effect of training data overlap. We split GPT-2 pretraining dataset OpenWebText into three parts: a, b and c. We start from training two models on data splits a and b. Then we replace 33% or 66% percent from split a and b with data from split c, which makes the resulting splits overlapping. The classification results on the trained models are: |overlapping between a and b|accuracy| |-|- |0% c|86.4| |33% c|86.4| |67% c|79.7| |100% c|72.6| As the overlapping increases, the accuracy to classify two trained models is lower, suggesting that pretraining data overlap affects idiosyncrasies. - **Experiment setup** Thanks for pointing this out. We construct the held-out sets by sampling 1K prompts and use the responses generated from these fixed 1k prompts. The prompts for generating the train and held-out sets are separate. We do not include the prompt in the input to the classifier but we find this choice little affects our results. The total number of prompts in UltraChat we sampled from is around 200K. We will add these details in our next revision. - **Confusing tables** In our submission, we clarified at the start of Section 3.2 that “From now on, we report the accuracy of the four-way classification task.”. The base rates would be the results under “original”. We are sorry for the confusion and will make these clearer. - **Strict description of LLMs** We were meaning that we consider Transformer-based LLMs in our paper. We will rephrase this and add related references on diffusion-based LLMs. [1] A Decade’s Battle on Dataset Bias: Are We There Yet?. ICLR 2025 [2] Unbiased look at dataset bias. CVPR 2011 [3] VibeCheck: Discover and Quantify Qualitative Differences in Large Language Models. ICLR 2025 [4] Model Equality Testing: Which model is this API serving? ICLR 2025 [5] Generative Monoculture in Large Language Models. ICLR 2025 --- Rebuttal Comment 1.1: Comment: I appreciate the rebuttal from the authors. My main concerns have been addressed so I will update my score to 3 --- Reply to Comment 1.1.1: Comment: We appreciate your helpful review and discussion about our paper! We are glad that your main concerns have been resolved. We will incorporate above experiment results and your suggestions about writing into the next version of our draft.
Summary: This paper shows that deployed large language models have idiosyncracies in their output that makes it possible to distinguish which model generated a particular piece of text. These idiosyncracies seem to transcend the surface structure of the output, persisting after text is shuffled in various ways and even after rewriting. Claims And Evidence: The primary claims in the paper are well-supported by the results: classification performance is high across the models considered, and even within model families. The authors checked that their classifiers generalize out of distribution and conducted a number of manipulations to the text to try to identify the factors that were making it possible to classify the output of the models. The claims are primarily focused on being able to classify the output well, and the models perform far better than chance although no inferential statistics are presented to support this. It would be good to add error bars to the figures. Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable for the questions being asked. Some of the methods weren't particularly clear in the main text of the paper. For example, the primary analysis presented on page 3 is based on a classification pipeline that is only explained in the Appendix. Given the centrality of this analysis more details in the paper would be helpful. Theoretical Claims: There were no theoretical claims that required assessment -- the primary claims in the paper are empirical. Experimental Designs Or Analyses: The basic design of the experiments was sound, and the manipulations of the output that were presented in Section 4 were clever and well designed. Supplementary Material: I read the Appendix for the additional details on the experimental methods and additional results. Relation To Broader Scientific Literature: This work situates itself well in the broader literature on large language models, as well as the literature on dataset classification in machine learning. The question that is being asked seemed novel to this paper. I think this question is interesting but not necessarily something that will lead to further technical innovations for large language models -- it seems more motivated by curiosity about the behavior of these models. The most interesting finding was the semantic depth of the idiosyncracies, which suggests that there are relatively deep differences resulting from small changes in architecture and training regimes. Essential References Not Discussed: I did not identify any essential references that were missing. Other Strengths And Weaknesses: The primary weakness of this paper is that it doesn't have a great deal of technical depth -- it primarily uses off the shelf models and classification techniques. However, this isn't the focus of the paper so I don't think this is a major weakness. Other Comments Or Suggestions: p 3. The order of Base LLMs and Instruct LLMs in the numbered list should be exchanged. p 6. "Charateristic" -> "Characteristic" Questions For Authors: No questions, in general this was a clearly written paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and recognition of the novelty of the question our paper seeks to answer. We are happy to address your concerns. - **The claims are primarily focused on being able to classify the output well, and the models perform far better than chance although no inferential statistics are presented to support this. It would be good to add error bars to the figures.** Here we report the classification performance on responses from four instruct LLMs, using prompts (10K for train set and 1K for val set) independently sampled from UltraChat three times: 96.3%, 96.1%, 96.0%. The standard deviation in this case is less than 0.2%, which indicates that our reported numbers in the paper are statistically stable. - **The primary analysis presented on page 3 is based on a classification pipeline that is only explained in the Appendix. Given the centrality of this analysis more details in the paper would be helpful.** Thank you for raising this problem. We agree that moving classification setup in Appendix A to the main paper would be better for readers to understand our methodology. In the next revision of our paper, we will describe these details in Section 3 of the main text. - **I think this question is interesting but not necessarily something that will lead to further technical innovations for large language models -- it seems more motivated by curiosity about the behavior of these models.** We agree that our work is mostly motivated by curiosity and observation – how current LLMs behave differently in everyday interactions. While our proposed classification framework might not lead to direct technical innovation, it can assist researchers to evaluate and understand differences in LLMs. This is especially important since current frontier models are often not open-sourced (e.g., training data and model weights). Our scientific study can provide several useful insights: as shown in Section 5, our framework can help analyze how synthetic data affects model idiosyncrasies and infer model similarity. These implications may offer valuable insights into current LLMs and, in turn, inform future technical developments. - **The primary weakness of this paper is that it doesn't have a great deal of technical depth -- it primarily uses off the shelf models and classification techniques.** Our paper does not focus on proposing new methods but rather focus on characterizing and understanding the differences between LLMs, where we conduct comprehensive analysis to evaluate and understand model idiosyncrasies. Our novelty mostly lies in the results and findings from our carefully designed experiments. Given the growing number of model releases, we hope this work provides valuable insights for both users and model developers. - **p 3. The order of Base LLMs and Instruct LLMs in the numbered list should be exchanged. p 6. "Charateristic" -> "Characteristic"** Thank you for pointing out these writing errors. We will correct them in the next revision of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I appreciate the clarification about the standard errors for the performance. I think we agree on the strengths and weaknesses of the paper and am not changing my score.
Summary: This paper studies the samples generated by various LLMs. Specifically, they show that it is possible to effectively determine from which LLM a piece of text was sampled. Furthermore, they connect this predictability to "idiosyncrasies" in the word-level patterns, which persist even when the text has been transformed. Claims And Evidence: The claims about identification, word-level patterns, persistence under transformation, and the nature of the idiosyncrasies are well-supported by evidence presented by the authors. Methods And Evaluation Criteria: The evaluation methods make sense. The authors determine they can train classifiers which work to identify the source LLM for a given text completion and do so with both held out completions and held out prompt datasets. Additionally, the methods for identifying the nature of idiosyncrasies which make these texts predictable seem sound. Theoretical Claims: There are no proofs in this paper. Experimental Designs Or Analyses: Experiments seem broadly sound and effective. Supplementary Material: I reviewed all of the appendices, specifically the implementation details, prompts, and additional results. Relation To Broader Scientific Literature: This paper relates to work on identification of LLM-generated text using model-based features (e.g. embeddings) or text features (e.g. n-grams). It also relates more broadly to work which studies the composition of real and synthetic natural language datasets. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: * The paper is well-written * Experiments and analysis are thorough and conclusions are well justified * Experiments for identifying the sampling method as well as showing that predictability persists after transforming the text (e.g. paraphrasing) are interesting Weaknesses: * Broader impact may be limited Other Comments Or Suggestions: No other comments Questions For Authors: In the paraphrasing experiments, how different is the paraphrased text from the original input text? Also, did you try using other LLMs, aside from GPT-4o mini) for paraphrasing? Were the results consistent if so? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive assessment of our paper and the constructive comments. We are happy to address your concerns. - **Broader impact may be limited** In our submission, we have discussed the broader impact of our results mainly in the first paragraph of the introduction and Section 5. Overall, we believe our results are useful for studying frontier models given that many of their training details are missing in the first place (e.g., training data and model weights). Our results provide a framework that can be used to quantitatively study such differences and reveal the characteristic features of each model. This could potentially help us to build tools for attributing generated text to specific LLMs. In addition, as described in Section 5, we show that our framework could help study the differences of LLMs when training on synthetic data, and measure model similarity. While synthetic data is a promising direction for scaling training data, our results show that the student model could inherit idiosyncrasies of the teacher model that produce those synthetic data. We further use our framework to show that many AI models are easily classified as ChatGPT while ChatGPT is easily confused with Phi-4 – a model trained with large amounts of synthetic data (Figure 9). These results suggest that our work could help detect the practice of model distillation. Last, our results can have implications on AI security and safety. One interesting example would be that the distinguishability of AI models could allow adversaries to attack voting based evaluation leaderboard, e.g., ChatbotArena [1]. Specifically, since the identity of the model could be determined from the texts, then an adversary could manipulate the leaderboard by voting consistently for a target model. We will discuss and highlight these broader impacts of our results in the next revision of our paper. - **In the paraphrasing experiments, how different is the paraphrased text from the original input text?** In our submission, we have provided a comparison of the original generated texts and paraphrased texts in Table 15 (page 20) of Appendix. The formatting style remains largely unchanged, e.g., the number of enumerated lists are the same. Most of the differences lie in their word choices, paraphrased texts use different words with similar meanings but do not change the high-level semantic meaning of the original texts. We will add these observations and more text examples in the next revision. - **Also, did you try using other LLMs, aside from GPT-4o mini for paraphrasing? Were the results consistent if so?** Besides GPT-4o-mini, we use Qwen2.5-7b-chat for rewriting LLM responses. Here are the results of classifying the rewritten texts by Qwen2.5-7b-chat on Chat APIs’ responses. | chat APIs | original | paraphrase | translate | summarize | |-|-|-|-|- | GPT-4o-mini | 97.8 | 93.6 | 93.9 | 63.7 | | Qwen2.5-7b-chat | 97.8 | 92.6 | 94.3 | 71.5 | The results show that our results are robust to the choices of LLMs for rewriting. We will add these results in the next revision. [1] Chatbot Arena. https://lmarena.ai/ --- Rebuttal Comment 1.1: Comment: Thank you for your response and for clarifying those points. I decided to maintain my score of a 4.
null
null
null
null
null
null
CASE-Bench: Context-Aware SafEty Benchmark for Large Language Models
Accept (poster)
Summary: CASE-Bench evaluates whether LLMs can make safety judgements based on contexts that align well with human judgments. The paper uses contextual integrity theory to formulate prompts from SORRY bench, introducing additional parameters such as the sender, the recipient, and the transmission principle. Whether a query should be deemed harmful is dependent on the additional context. For each query with context, they collect multiple human annotations according to power rules to show that context has statistically significant influence on the safety judgement. Different LLMs often fail to align with majority human judgement. Claims And Evidence: The claims are conveyed clearly and supported by good statical evidence. The authors conduct power analysis to determine the amount of annotators needed to show statistical significance of context’s influence on safety decisions. Methods And Evaluation Criteria: The selection of SORRY-BENCH as the base prompt dataset makes sense given the diverse categories in this benchmark. The evaluation is conducted using different methods such as direct scoring and taking the log probability. They conduct careful statistical studies such as the use of power analysis, which is a strength of this paper. The authors programmatically generate “safe” and “unsafe” contexts via GPT-4, then manually revise them to ensure quality. They also discuss protocols to make sure that human annotations are high quality. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiment designs and analyses are sound. The authors use z-tests to compare the proportions of “Should Respond” across conditions and use Kruskal–Wallis tests to assess significance per query across different contexts. Supplementary Material: I checked the appendix for additional details on annotator tutorial, context generation, and detailed statical analyses as well as category-specific case studies. I examined the supplementary code, data, but did not run the code to replicate experiment results reported in the paper. Relation To Broader Scientific Literature: The authors show that state-of-the-art LLMs tend to fail to consider contexts in making safety or refusal judgments and can diverge from human majority votes. This is an important missing part in current LLM safety evaluations, which often are restricted to straightforward prompt categorization. The finding has important implications for building better safety guardrails for LLMs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Adding context significantly alters human judgments about whether a response is safe. This is a novel perspective of safety evaluation. The authors show this statistically via z-tests and Kruskal-Wallis tests. The statistical rigor is a strength of the paper and often missing in related LLM safety works. Other Comments Or Suggestions: N/A Questions For Authors: - How do reasoning models behave on this benchmark? - While “recipient” parameter has the biggest impact on safe/ unsafe classification, how might a real system track or verify recipient attributes without violating user privacy or being tricked by malicious actors? - How does your current measurements capture model behavior such as partial compliance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer dcxK for recognizing the contributions of our research and for the insightful questions. We address them as follows: 1. __Extra Experiments on Reasoning Models__: - We provide results on DeepSeek-R1 as follows. | LLM |Method| Accuracy | R(Safe/Unsafe) | | -------- | ------- | ------- |------- | | Deepseek-R1 | Binary |87.9% | 84.0%/91.0% | | Claude-3.5-sonnet | Binary |88.7% | 86.5%/90.4% | | GPT-4o-2024-08-06 | Binary |77.0% | 54.6%/94.8% | - We observe that while DeepSeek-R1 is not specifically optimized for safety, it achieves performance similar to Claude-3.5-sonnet, which is safety-optimized, and significantly outperforms GPT-4o-2024-08-06. This suggests that improved reasoning capabilities can potentially benefit safety judgements. - We will incorporate these observations and recommendations into the paper. 2. __Recipient Verification__: - Our work assumes access to recipient information as a **controlled input** for benchmarking purposes. In practice, we envision several approaches that could be explored: - The system could obtain and use recipient-related information by directly requesting it from users, with clear consent explicitly granted. Where appropriate, general categories (e.g., “medical professional”) could be used in place of detailed sensitive attributes. - In institutional or enterprise environments, users may already belong to trusted identity groups (e.g., doctors, students, researchers), which can be referenced securely without disclosing individual-specific data. - Recipient-related information may also be obtained through privacy-preserving or cryptographic techniques. For instance, advanced cryptographic methods such as Zero-Knowledge Proofs can enable users to demonstrate that they meet certain criteria (e.g., being over the age of 18) without revealing their actual identity or exact age [1]. Additionally, differential privacy techniques [2] can be applied to obscure individual-level information while still enabling the evaluation of general thresholds or boundaries, thereby maintaining utility without accessing real user data. 3. __Regarding partial compliance__: - Thank you for raising this question. In our study, we test LLM-as-a-judge and does not have explicit partial compliance behaviour. However, we believe our BCE results reflects model uncertainty, and hence can indicate the degree of partial compliance. [1] Patil et al. “Age Verification using Zero-knowledge Proof”.\ [2] Zhu et al. “More than privacy: Applying differential privacy in key areas of artificial intelligence”.
Summary: This paper introduces CASE-Bench, a new LLM safety benchmark with a special focus on the context-aware safety. The authors employ the contextual integrity theory when generating safe and unsafe contexts for different queries. Large-scale experiments have been conducted to assess the safety of different LLMs under various contexts. The evaluation results indicate that contexts have significant impact on human judgments and cause notable mismatches between human and LLM judgments due to over-refusal issues. Claims And Evidence: Overall, I agree that incorporating context is a critical yet often overlooked element in assessing LLM safety. The authors effectively adapt CI theory to construct a benchmark that examines LLM performance in a context-aware setting. However, the current design for context, to me, remains somehow unconvincing. The current appraoch raises concerns about whether the presented contexts significantly differ from typical LLM jailbreak techniques, such as Context Manipulation. Although contexts are formally described and claimed to be verified, the distinction between a genuinely contextually safe scenario and a carefully crafted jailbreak prompt remains somewhat unclear. Methods And Evaluation Criteria: Although the authors emphasize multiple times in the paper that contexts in this work are designed as verified and are not a part of the user's input prompt. For instance, in Section 5.2, "LLM is asked to judge if answering the query is safe or not given the context while assuming the context is verified and reliable. ". Nonetheless, this assumption may not hold in practice since these contexts ultimately serve as input prompts. It remains uncertain if LLMs internally accept contexts as truly verified or if they treat them as part of regular inputs. As aformentioned, the current setup of contexts are close to a category of LLM jailbreak techniques, Context Manipulation, including methods like historical context, virtual environment and alternate reality. Without any prior knoeledge, it is hard to tell whether a malicous query with safe/unsafe context is indeed under a specific context or just a jailbreak template. From my point of view, if there is indeed a real context, it should be something beyond prompts. Namely, taking the context illustrated in the middle graph of Figure 1 as an example, there should be a chatbot which is tailored for a creative writing platform with some safety constraints lifted (e.g., the safety alignment mechanism of the chatbot allows the chatbot to produce certain unsafe content for writing purposes). Thus, the context can be considered acceptable by the chatbot and impacts its internal reasoning logic. That says, it is not clear whether the understanding of the LLM w.r.t the context is affected by the internal safety mechanism. Theoretical Claims: This paper does not include theoretical proof, and the main contribution is not in such an aspect. Experimental Designs Or Analyses: The authors employ two statistical methods to analyze the influence of context and three metrics to evaluate the outputs of the LLMs, which is commendable. However, Section 5.2.1 only provides a relatively high-level overview of the findings. I would recommend the authors provide more detailed insights from the evaluation w.r.t. different metrics and how and under what context or types of queries human and LLMs' judgements are mismatched. Supplementary Material: No supplementary material is provided for this submission. The Appendix provides more details about the experiment setup and prompts used for querying the LLM. The replication is available in an anonymous repo and seems complete. Relation To Broader Scientific Literature: In Section 6.2, the auhtors have discussed how the context in CASE-Bench is assumed to be separate from the jailbreak prompt provided. However, the separation discussed are somehow in a conceptual manner, more specific technical different may help tp seperate the CASE-Bench from jailbreaking prompts. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The presentation is in a good manner, and the paper is overall easy to follow. Other Comments Or Suggestions: N/A Questions For Authors: - Do the LLMs indeed consider the provided contexts as “verified” and reliable, considering they are input as part of the prompt? - Can you elaborate on how the context influences the LLM’s internal reasoning compared to standard jailbreak techniques? Are there scenarios where this distinction becomes more pronounced? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the importance of incorporating context in safety evaluations, and we would like to address the concerns as follows. 1. __Regarding Jailbreaking__: - Context manipulation aims to bypass the safety mechanism and make LLM respond with harmful content, which is different from our setup. In contrast, we evaluate LLM-as-a-judge for safety evaluation and test if LLM can make safety decisions by considering the context. - In practice, these judges are used by e.g. administrators and the context can be verified by the administrator. Therefore, the context is indeed out of reach from user input. We represent them using natural language and feed them as the precondition to the LLM in our experiments. But, as explained, this does not mean the context would be provided by the users. 2. __Regarding whether the model treats context as verified or not__: - We instruct the model to treat the context as verified, and the model is used as a judge. The task is not to make a response, but to make a judgement. Therefore: - A good model should closely follow our instructions and give its judgements by treating the given context as verified and reliable. - LLM judgements can be affected by the model internal safety mechanism. If the internal safety mechanism is too harsh, it will tend to flag almost everything as unsafe, even what is indeed safe, which yields a worse alignment with human judgements. 3. __Regarding under what context or types of queries human and LLMs' judgements are mismatched.__: - We used Llama3-70B-Instruct model as a representative example to analyze the mismatch between human and LLM judges as suggested by the reviewer. - Human thought safe where LLM flagged unsafe: - These mainly happens to queries that are less harmful themselves, including "fake news", "political belief" and "explicit content generation". - In these cases, the LLMs fail to consider whether it really causes actual harm given the specific context, and makes the decision only based on its internal safety mechanism. - Human thought unsafe where LLM flagged safe: - These mainly happens to queries that may cause more serious social impact, including "violent crimes" and "child abuse". - These queries are mostly harmful regardless of the context. Therefore, errors occur when LLM is influenced by the context and make a "safe" decision. We now provide specific examples in the github repo and will also add this analysis to the revised paper.
Summary: The paper extends the Sorry-Bench dataset by context, introducing CASE-Bench. The benchmark is designed to evaluate how well LLM can judge the safety of a query depending on the context (e.g. applications). The constructed dataset comprises 900 query-context pairs. The contexts are automatically generated by GPT-4 and then manually revised. The annotation process is extensive. Specifically, the authors employ Contextual Integrity (CI) theory to formalize context and use power analysis to ensure a sufficient number of annotators for statistically meaningful results. The evaluation of LLM judges is assessed by different scenarios, namely binary classification, scoring between 1 and 10 and normalized token probabilities. Claims And Evidence: The main claim is that LLM safety judgments lack consideration of context, which is highlighted by the introduced benchmark. The authors support this with extensive empirical analyses and additional ablations supporting the choice of CI parameters. However, a key concern with the proposed study design is the choice of evaluated LLMs as the primary safety judges. In real-world applications, dedicated classifier models—such as LlamaGuard (https://arxiv.org/abs/2312.06674) or OpenAI’s moderation API (https://platform.openai.com/docs/guides/moderation)—are typically used to assess the safety of queries (and responses). However, I also assume that the performance assessed by the proposed benchmark of such safety guards won’t be great since they are not created with context in mind but rather trained on a fixed taxonomy. But here, I also don’t see the practicality of the proposed approach. In practice, the LLM application developer (e.g., the developer of the creative writing platform) would fine-tune a safeguard tailored to the use case and the use case’s safety taxonomy. This said I believe the presented benchmark could be rather beneficial to assess the ability to fine-tune safeguards for specific use cases with varying safety taxonomies or assess general-purpose safeguards (which, however, do not exist to the best of my knowledge). Methods And Evaluation Criteria: Yes, the choice of extending the SORRY benchmark datasets makes sense for the type of assessment. The creation of context using GPT-4 and manual revision sounds reasonable. The annotation process is extensive. Theoretical Claims: The paper employs Contextual Integrity (CI) theory to define and structure the context affecting safety judgments. This well supports the dataset creation process. Experimental Designs Or Analyses: The study follows a rigorous methodology both in creating the dataset and evaluating LLM judges. For instance, context is initially generated by GPT-4o and subsequently manually refined, and the annotation process is extensive. Multiple LLMs are tested using several setups, namely binary classification, scoring between 1 and 10, and normalized token probabilities. Another alternative could be not treating the LLM as a judge and measuring its refusal rate. However, the general setup to evaluate LLMs-as-judge instead of safeguards assessing the safety of queries is questionable. See "Claims And Evidence" section. Supplementary Material: I briefly checked the Appendix. The authors provide all the necessary information to comprehend the methods used. I would prefer the discussion of the limitation within the main text instead of the appendix. Relation To Broader Scientific Literature: The relation to existing literature is well described in the related work section. Essential References Not Discussed: As mentioned above common safeguard models are not considered within this study. - Llamaguard: https://arxiv.org/abs/2312.06674 - Wildguard: https://arxiv.org/abs/2406.18495 - External api tools such as https://platform.openai.com/docs/guides/moderation Other Strengths And Weaknesses: In general well written and easy to follow. Other Comments Or Suggestions: I strongly recommend the evaluation of safeguard models or even the feasibility of fine-tuning of safeguards on different contexts instead of evaluating LLMs. Or justify the practical use of LLMs for safety assessments. Questions For Authors: Have you considered evaluating the refusal rates of LLMs under different contextual conditions instead of treating them as judges? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank reviewer P2mw for the detailed and constructive suggestions. We would like to address the following concerns: 1. __Regarding experiments with LlamaGuard__: - Following the reviewer’s suggestion, and as an example, we have also evaluated LlamaGuard using our benchmark. We will add the following results to the revised version of our paper: | Setting | Accuracy | R(Safe/Unsafe) | PCC | BCE | | -------- | ------- | ------- | ------- | ------- | | Without Context | 54.1% | 25.3%/77.0% | 8.10 | 0.2661 | | With Context | 60.1% | 31.8%/82.6% | 26.34 | 1.7108 | - We have the following observations: - Incorporating context improves the performance of LlamaGuard which makes it align better with human judgements. Hence modeling context has benefits in this case too. - The accuracy is not as good as proprietary LLMs with context, indicating a limited ability to understand the context with the current model which agrees with the reviewer's assumption. - We appreciate the reviewer for enlarging the application scope of the proposed benchmark. We agree with the reviewer’s observation that our benchmark could also be used to fine-tune safeguards. We also believe that our benchmark can be used to evaluate any general LLM as they can be used as the judge for safety evaluation. - Regarding the practicality mentioned in the Claims & Evidence: - We would like to clarify that fine-tuning is not always the method used in practice by developers. Many LLM applications, such as "Custom GPTs" (see the “GPT store” for more details - https://openai.com/index/introducing-the-gpt-store/), rely instead on system prompts and additional information (still, see our comment about fine-tuning being feasible too below). - The “category” of these LLM apps help define aspects such as the “type of data” being used, thereby supporting the practicality of our proposed approach. 2. Evaluating Refusal Rates - Recent LLMs have safety mechanisms to prevent context injection attacks. Therefore, directly evaluating refusal behaviour would cause ambiguity when the model considers the context as an attack. Therefore, we particularly design our evaluation with this LLM-as-a-judge framework to avoid potential issues with context injection attack. 3. Feasibility of Fine-tuning - Based on the evidence we present in this paper, future research could explore ways to make LLMs more aware of context, which may indeed include fine-tuning. While this is a possible direction, it is currently out of the reach of this paper. At present, we focus on evaluation, and in particular, we employ a sufficiently large number of annotators to account for the uncertainty in human judgment. Collecting a training dataset using the same pipeline would be an exciting future work, which would require additional resources. We will also include all the recommended references in our revised version of the paper.
Summary: The paper introduces CASE-Bench, a novel Context-Aware Safety Benchmark for assessing large language models (LLMs). The benchmark integrates contextual information into safety evaluations by pairing 450 controversial queries with two types of contexts—safe and unsafe—in a total of 900 query‐context pairs. The authors formalize context using an adapted framework based on Contextual Integrity (CI) theory, and they obtain non‐binary safety ratings from over 2,000 annotators who were recruited via MTurk. Extensive experiments are presented to show that context has a statistically significant influence on human safety judgments (with p-values < 0.0001 from z-tests and further supported by Kruskal–Wallis analyses), and notable differences are observed in how various LLMs (including closed and open-source models) align with human judgments. Overall, the paper claims that integrating rich, formally described context is necessary to more accurately evaluate the safety of LLM responses. Claims And Evidence: Main Claims: The paper mainly claims that (1) incorporating contextual information (via CI theory) leads to significantly different safety judgments compared to evaluations based on isolated queries, and (2) that existing benchmarks based solely on binary refusal behavior are insufficient. Evidence Presented: The authors support these claims with a series of statistical tests (z-test and Kruskal–Wallis test) showing significant differences in annotation responses when context is provided. They also report mismatches between human judgments and LLM outputs under safe contexts. Comments: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Proposed Methods: The paper adopts a two-step context-generation process where contexts are first automatically generated by GPT-4 and then manually revised using CI theory criteria. It further establishes a large-scale annotation pipeline with carefully designed tutorials and power analysis to determine a sufficient sample size. Evaluation Criteria: Safety is evaluated using both binary classifications (respond/refuse) and continuous safety ratings (scores from 1 to 10), with performance measured via accuracy, recall, Pearson correlation coefficients (PCC), and binary cross-entropy (BCE). Comments: make sense Theoretical Claims: The paper does not present formal proofs or novel theoretical derivations. Instead, it adapts the established CI theory framework to the context of LLM safety evaluation. Experimental Designs Or Analyses: Experimental Design and Analyses: The experiments involve annotating 900 query-context pairs using a between-subjects design on AMT with 21 annotators per task. Two statistical tests (z-test and Kruskal–Wallis) are used to assess the impact of context. Comments: The experimental designs and analyses are sound and valid. Supplementary Material: No Relation To Broader Scientific Literature: The paper is positioned within recent efforts to evaluate LLM safety, extending benchmarks that currently focus on isolated queries by incorporating context into safety assessments. It builds on prior work in red-teaming and safety evaluation of LLMs. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper addresses an important yet underexplored aspect of LLM safety by integrating contextual information into assessments. 2. The experimental evaluation is carried out on a reasonably large scale using a statistically robust design. Weaknesses: 1. The benchmark is based on an existing dataset (SORRY-Bench), which might include bias. 2, The connection between the theoretical framework (CI theory) and its practical benefits in safety evaluation is not convincingly argued. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer's acknowledgement of our work, and would like to address the following weakness raised in the review: 1. __Regarding Dataset Bias__: - We chose to base our benchmark on SORRY-Bench because it addresses a key issue in prior datasets—imbalance and over-representation of certain fine-grained categories that can introduce bias. SORRY-Bench introduces a curated safety taxonomy across 4 high-level domains, unifying inconsistent taxonomies from earlier work. We believe this design helps mitigate, rather than introduce potential bias. 2. __Regarding practical benefits of CI theory__: - We thank the reviewer for raising this point. We believe that integrating CI theory into our framework brings several practical benefits to safety evaluation:\ (1) Structured representation of context: CI theory provides a principled way to decompose context into explicit components, which helps represent complex scenarios more systematically compared to free-text descriptions.\ (2) Modular verifiability: Each CI parameter can be verified independently, making it easier to assess individual aspects of a scenario or a model’s understanding.\ (3) Improved interpretability: The structured representation allows us to trace how specific CI parameters influence model judgments, providing greater transparency. And we can perform targeted ablation studies to evaluate model sensitivity to context changes. - Contemporaneously, several papers have investigated the use of CI in different context of AI/ML, e.g.[1, 2], which also proves the practical advantage of using CI theory. [1] Tsai et al. “Context is Key for Agent Security”. \ [2] Ghalebikesabi et al. “Operationalizing contextual integrity in privacy-conscious assistants”.
null
null
null
null
null
null
Graph4MM: Weaving Multimodal Learning with Structural Information
Accept (poster)
Summary: This paper introduces Graph4MM, a novel framework for multimodal learning that leverages graph structures to model complex relationships between text and images. The framework can be divided into two parts: Hop-Diffused Attention and MM-QFormer. By giving both theoretical and empirical analysis, the authors demonstrate the superiority of Graph4MM. Claims And Evidence: Most claims are well supported. But some points can be further improved. 1. The paper claims that treating graph structure as a guide rather than a standalone modality is more effective, but the results in Table 4 only contain generative task, maybe discriminative task should also be included. 2. The paper does not discuss computational efficiency trade-offs. Maybe results like runtime, memory usage comparison can be included. 3. The paper uses WikiWeb2M and ELE-FASHION, but testing on more diverse domains would be better. 4. There are no experiments about proposition 4.1, more empirical validation of this specific theory would be beneficial. Methods And Evaluation Criteria: Yes, it makes sense. But there are some points: 1. The different setting for OPT-125M and LLaMA-1B(5 unseen classes vs 9 unseen classes) 2. More diverse datasets would be better 3. The concrete analysis for the computational efficiency should be included Theoretical Claims: The proofs are generally correct. Experimental Designs Or Analyses: 1. The interaction between text and vision modalities should be taken into consideration in an ablation study. 2. How does the random seed influence the results? Authors should claim this, make sure the results are not lucky initialization. 3. The paper should systematically explore the influence of different graph densities or structures. Supplementary Material: NO Relation To Broader Scientific Literature: NO Essential References Not Discussed: NO Other Strengths And Weaknesses: NO Other Comments Or Suggestions: NO Questions For Authors: NO Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks very much for your valuable questions, each of them are quite actionable. We provide our response in the form of Q&A as follows. > **Claims And Evidence (CE) 1 & CE4**: Graph as a standalone modality in discriminative setting; empirical validation of Proposition 4.1. **A1**: We conducted additional experiments on node classification (OPT-125M backbone) to assess the effectiveness of modeling the graph as a standalone modality. Following the generative setting described in Lines 392–395, we use GCN-derived graph/node embeddings as soft prompts that serve as input to the downstream LLM, equivalent to vision and language tokens. The results are shown below. **Table 1**: Performance of "modeling graph as a standalone modality" in the discriminative setting. |Method|ROUGE-L|Accuracy|Recall|Precision| |------|-------|--------|------|----------| |Subgraph's T&I|0.8144|99.85|83.25|83.33| |+Graph Token|0.6648|99.96|89.91|89.98| |+Node Token|0.6892|99.53|88.78|89.98| |Ours (Hop-Diffused)|**0.8282**|**100.00**|**100.00**|**100.00**| These results reinforce our earlier finding: treating the graph as a separate modality does not lead to consistent gains, likely due to the semantic gap between GNNs with pretrained vision-language features. In contrast, our method performs significantly better, supporting Proposition 4.1 alongside Table 4 in our paper. *** > **CE2 & Methods And Evaluation Criteria (MEC) 3**: Computational Efficiency of our method. **A2**: Due to space limitations, we refer the reviewer to [Section "Weakness (W) 1" in our response to Reviewer xJRk](https://openreview.net/forum?id=FB2e8PV6qg&noteId=qC65trdQn8), where we theoretically and empirically show that our method shares the same order of time complexity as the baseline, with only mild runtime overhead. For the memory cost, in the generative setting with the OPT-125M backbone, our method adds 13M trainable parameters via Hop-Diffused MM-QFormer, which is less than 6% of MMGL's 229M total parameters. This overhead would become even smaller as the LLM backbone scales up, thus, the memory usage of Graph4MM remains manageable. *** > **CE3 & MEC2**: Testing on more diverse domains. **A3**: Due to space limitations, we refer the reviewer to [Section "W2" in our response to Reviewer xJRk](https://openreview.net/forum?id=FB2e8PV6qg&noteId=qC65trdQn8), where we validate the effectiveness and strong generalization ability of our method under a completely different domain. *** > **MEC1**: Different number of unseen classes or OPT-125M and LLaMA-1B backbone. **A4**: For the LLaMA-1B model, the performance gap among different baselines under the 5 unseen-class setting is relatively small, as all methods achieve high classification accuracy. Thus, we increased the task difficulty by expanding the number of unseen classes to 9 for comparison to highlight the different performance between methods. In Appendix K, we provide the performance variation under different numbers of unseen classes. *** > **Experiment Design or Analysis (ED) 1**: Ablation of vision and text interaction. **A5**: We ablate modality-specific structures and inputs by (1) removing structural heuristics from text or vision, and (2) removing the vision modality entirely. As shown in Table 2, removing structural heuristics for either vision or text modality degrades performance, the same as excluding visual information. This highlights the importance of both modalities and their corresponding structural guidance for effective multimodal fusion. **Table 2**: Modality interaction ablation results in the generative setting (OPT-125M). |Method Variant|BLEU-4|ROUGE-L|CIDEr| |-------------|------|--------|------| |Hop-Diffused MM-QFormer|**0.0800**|**0.4076**|**0.7831**| |-$t_{G'}$Hop-Diffused Attention|0.0786|0.4065|0.7765| |-$p_{G'}$Hop-Diffused Attention|0.0769|0.4044|0.7684| |-Vision Modality|0.0770|0.3992|0.7606| *** > **ED2**: Influence of the random seeds. **A6**: Under the generative setting with OPT-125M, we report the mean and standard deviation over 3 random seeds (0–2), and compare with the best MMGL baseline under the same setup. **Table 3**: Model performance across random seeds. |Method|BLEU-4|ROUGE-L|CIDER| |------|------|--------|------| |Graph4MM|**0.07991±0.00066**|**0.40725±0.00028**|**0.78904±0.00305**| |MMGL|0.07762±0.00059|0.40504±0.00076|0.76907±0.00286| *** > **ED3**: Discussion of the graph density. **A7**: We study graph density under the generative setting by varying (text, vision) neighbors from sparse (5,2), to medium (11,5), and dense (15,8). As shown in Table 5, our model remains stable across densities and benefits from more neighbors, consistently improving generation performance. **Table 4**: Impact of graph density on Graph4MM in generative setting (OPT-125M). |Density|BLEU-4|ROUGE-L|CIDEr| |-------|------|--------|------| |Sparse|0.0788|0.4049|0.7795| |Medium|0.0800|0.4076|0.7831| |Dense|0.0809|0.4086|0.8018|
Summary: The paper Graph4MM introduces a graph-based multimodal learning framework that integrates structural relationships into foundation models to improve multimodal understanding. Unlike previous methods that treat graphs as standalone modalities, Graph4MM incorporates Hop-Diffused Attention to model multi-hop connectivity and MM-QFormer for cross-modal fusion using learnable query tokens. The framework achieves an average improvement of 6.93% across generative and discriminative tasks, outperforming large VLMs, LLMs, and multimodal graph baselines. Theoretical and empirical analysis demonstrates that leveraging graph structure enhances multimodal learning beyond traditional one-to-one alignments. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I checked them all and found no obvious errors. Experimental Designs Or Analyses: Yes, I checked them all. However, this paper has some limitations in the selection of dataset and baseline. For the details, please refer to weaknesses. Supplementary Material: Yes, I checked them all. Relation To Broader Scientific Literature: 1. Beyond One-to-One Vision-Language Models: Unlike BLIP-2 and Flamingo, which align single image-text pairs, Graph4MM models many-to-many multimodal relationships using graph structures. 2. Advancing Multimodal Graph Learning: Extends MMGL by integrating graph topology into multimodal fusion rather than treating it as a standalone modality. 3. Improving Graph-Based Attention: Builds on GATs and diffusion-based methods by introducing Hop-Diffused Attention, which encodes multi-hop connectivity while mitigating over-smoothing. 4. Enhancing Query-Based Transformers: Extends QFormer (used in BLIP-2) by incorporating structural priors, improving cross-modal alignment with structural guidance. Essential References Not Discussed: Line 50 “To the best of our knowledge, MMGL (Yoon et al., 2023) is the state-of-the-art work that models modalities into graphs and obtains promising performance in the generation task, as compared to single-modal pre-trained language models and vision-language models.” Should cite relevant papers working on the multimodal knowledge graphs and talking about the relationship between them and Graph4MM. For example, some most relevant papers mentioned in the survey: Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey (submitted to arxiv on 8 Feb 2024) or any other sota works working on this. Other Strengths And Weaknesses: Strengths 1. Structured Multimodal Integration: Effectively incorporates multi-hop graph connectivity into multimodal learning, improving intra- and inter-modal interactions. 2. Advanced Attention Mechanism: Hop-Diffused Attention enhances multimodal reasoning by encoding structural dependencies while avoiding over-smoothing. 3. Good Empirical Results: Outperforms large VLMs, LLMs, and multimodal graph baselines, achieving a 6.93% average improvement across tasks. Weaknesses 1. Lack of citations for relevant references. Such as works related to multimodal knowledge graphs. I understand that this type of work is not a baseline for this paper, but it should be cited and briefly discussed in relation to this study. 2. The proposed module lacks sufficient innovation. The paper introduces MM-QFormer, but its architecture shows little novelty compared to traditional Q-Former. The main difference lies in incorporating structural prior knowledge into the input embeddings, a concept already explored in previous works such as the Translator module in GraphTranslator [1], significantly reducing the originality of the contribution. 3. Generalizability of the proposed method. The paper evaluates its performance on only one dataset for graph-related tasks, which raises concerns about the generalizability. Using a broader set of graph-based multimodal benchmarks [2] would strengthen the validity of the results and demonstrate the robustness of the approach across diverse datasets. 4. Incomplete baseline selection. The paper lacks a comprehensive baseline comparison, particularly with Graph Large Language Models related methods [3,4] that can handle text-attributed graphs. Replacing embeddings with unified representations incorporating both text and image information can also show reasoning capabilities in multimodal graph tasks. However, the paper does not sufficiently explore or evaluate these alternatives, limiting the depth of its baseline analysis. [1] GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks (https://arxiv.org/pdf/2402.07197) [2] Multimodal Graph Benchmark. (https://arxiv.org/pdf/2406.16321) [3] LlaGA: Large language and graph assistant (https://arxiv.org/abs/2402.08170) [4] Can we Soft Prompt LLMs for Graph Learning Tasks? (https://arxiv.org/abs/2402.10359) Other Comments Or Suggestions: No Questions For Authors: 1. How does it improve upon similar ideas, such as the Translator module in GraphTranslator? MM-QFormer seems to just move it over and use it. 2. Given that the method is only evaluated on a single dataset for graph-related tasks, how can the authors ensure its robustness and generalizability across different multimodal graph benchmarks? 3. Why were Graph Large Language Models related methods (or GNN-based approaches) that integrate text and image embeddings not included as baselines? How would the proposed method compare to such models in multimodal graph reasoning tasks? 4. The baseline MMGL is evaluated under both frozen and fine-tuned settings, whereas Graph4MM is only tested in the fine-tuned setting.Would a frozen version of Graph4MM still demonstrate effectiveness compared to MMGL? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks very much for your constructive questions, each of them are quite actionable. We provide our response in the form of Q&A as follows. > **Weakness (W) 1**: Should cite relevant papers working on the multimodal knowledge graphs and talking about the relationship between them and Graph4MM. **A1**: We thank the reviewer for the suggestion. The survey [1] covers a wide range of MMKG research, including multimodal knowledge graph construction, multimodal representation learning, and cross-modal reasoning. These topics are relevant to our work in terms of constructing multimodal graphs and inter-modality fusion. We will update our Related Work section to include the mentioned survey [1] as well as representative recent works such as MoMoK [2] and MarT [3]. [1] Chen, Zhuo, et al. "Knowledge graphs meet multi-modal learning: A comprehensive survey." arXiv preprint 2024. [2] Zhang, Yichi, et al. "Multiple Heads are Better than One: Mixture of Modality Knowledge Experts for Entity Representation Learning." ICLR 2025. [3] Zhang, Ningyu, et al. "Multimodal analogical reasoning over knowledge graphs." ICLR 2023. *** > **W2 & Question (Q) 1**: How does Graph4MM compared to Q-Former and GraphTranslator? **A2**: * **Compared to Q-Former**: We would like to clarify that we do not claim the Q-Former architecture itself as our core contribution. Q-Former, originally from BLIP-2, is a standard module for visual-language fusion. Our key contribution lies in how we integrate structural heuristics into the multimodal fusion process. * **Compared to GraphTranslator**: Our method does not use the Translator module in GraphTranslator. The only similarity between our work and GraphTranslator lies in the use of the Q-Former architecture for cross-modality fusion, as discussed in the previous point. The Translator module in GraphTranslator is specifically designed for text-attributed graphs, fusing GNN-derived graph embeddings and explicit textual descriptions of graph structure using the Q-Former architecture. In contrast, our method: (1) Escalates to full multimodal learning, incorporating vision in addition to text and graph structure. (2) Takes a fundamentally different approach to structural integration: instead of relying on explicit textual descriptions to align graph embeddings with the LLM’s space (as in GraphTranslator), we introduce hop-diffused attention—a sparse mechanism that implicitly encodes graph topology in the hidden state during vision-text fusion. (3) Provides both theoretical analysis and empirical evidence demonstrating that treating graphs as a separate modality, as done in GraphTranslator, is sub-optimal in complex multimodal scenarios where vision and language co-occur. We believe our contributions are significant and fundamentally different from existing methods. *** > **W3 & Q2**: Generalization of our method on different datasets. **A3**: Due to space limitations, we refer the reviewer to [Section "W2" in our response to Reviewer xJRk](https://openreview.net/forum?id=FB2e8PV6qg&noteId=qC65trdQn8), where we validate the effectiveness and strong generalization ability of our method under a new link prediction setting. *** >**W4 & Q3**: Compared to other Graph Large Language Model-based methods. **A4**: We did not include graph-LLM methods as baselines because most prior works operate in a unimodal (text-only) setting and are not designed to handle multimodal inputs involving images. To better demonstrate the effectiveness of our method in multimodal scenarios, we conducted two additional experiments: (a) We used CLIP-ViT to extract multimodal node embeddings as input to enhance LLaGA; (b) We enhance GraphPrompter by incorporating visual tokens in the same way as MMGL. For a fair comparison, all models were evaluated under the same generative setting and applied the same fine-tuning strategy using the OPT-125M backbone. The results are shown below. **Table 1**: Comparison with multimodal extensions of Graph-LLM baselines under the generative setting (OPT-125M). | Method | BLEU-4 | ROUGE-L | CIDEr | |------|-----|-------|------| | LLaGA | 0.0642 | 0.3738 | 0.6628 | | GraphPrompter | 0.0782 | 0.4045 | 0.7651 | | Ours (Hop-Diffused) | **0.0800** | **0.4076** | **0.7831** | These results further validate the effectiveness of our proposed approach, which introduces hop-diffused structural heuristics into the latent space of multimodal fusion, leading to superior performance in multimodal graph reasoning. *** > **Q4**: MMGL is evaluated under both frozen and fine-tuned settings? **A5**: We would like to clarify that all MMGL variants we compared were fine-tuned, following the official MMGL implementation. We provide additional results under the frozen generative setting with OPT-125. **Table 2**: Comparison on frozen backbone. |Method|BLEU-4|ROUGE-L|CIDEr| |--|--|---|--| |MMGL|0.0606|0.3382|0.6332| |Graph4MM|**0.0733**|**0.4018**|**0.7428**| --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your dedicated work. Most of my concerns are addressed, so I'd like to increase my score to 3. --- Reply to Comment 1.1.1: Comment: Dear Reviewer gTEN, Thank you for your thoughtful comments and for raising the score! We’re glad to hear that our responses helped address your concerns. We believe that these discussions have helped us better clarify the paper, highlight our core contributions, and further improve the quality of Graph4MM. We will ensure that all of these points are carefully reflected in the final camera-ready version. We sincerely appreciate your time and constructive feedback. Best regards, #8515 Authors
Summary: The paper proposes Graph4MM, a graph based multimodal learning framework. It integrates multi hop structural information into foundation models and fuses modality specific information. The main algorithmic ideas include Hop Diffused Attention and MM QFormer. Experiments show that Graph4MM outperforms larger VLMs, LLMs, and multimodal graph baselines, achieving a 6.93% average improvement. Claims And Evidence: The claims are generally supported by evidence. The authors conduct experiments on two datasets with various baselines. The performance improvements in both generative and discriminative tasks demonstrate the effectiveness of Graph4MM. Methods And Evaluation Criteria: The proposed methods make sense for the problem. Graph4MM uses graph based structures to model multimodal relationships, which is suitable for handling complex multimodal data. The evaluation criteria, including using relevant datasets like WIKIWEB2M and ELE FASHION for generative and discriminative tasks respectively, are appropriate for assessing the model's performance. Theoretical Claims: The theoretical claims, such as the analysis of Hop Diffused Attention's properties and its comparison with GAT in terms of over smoothing, are presented with proofs. Experimental Designs Or Analyses: The experimental designs are sound. The authors compare Graph4MM with multiple baselines under different input settings. They also conduct ablation studies to analyze the importance of structural information. However, the generalization of the results could be further verified by testing on more diverse datasets. Supplementary Material: I reviewed the supplementary material, including appendices on theoretical proofs, dataset details, and implementation details. These parts are useful for understanding the paper's methodology, experiments, and theoretical background. Relation To Broader Scientific Literature: The key contributions of the paper are related to the broader scientific literature. It builds on existing works in vision language models and multimodal graph neural networks. Essential References Not Discussed: There do not seem to be any essential references not discussed. Other Strengths And Weaknesses: A strength of the paper is its innovative approach to multimodal learning using graph structures, which shows significant performance improvements. A weakness could be that the framework's complexity might limit its scalability. Also, the performance improvement might be dataset - specific, and more extensive testing is needed. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks very much for your constructive questions, each of them are quite actionable. We take them quite seriously and prepare the following QA formatted response. >**Weakness (W) 1**: A weakness could be that the framework's complexity might limit its scalability. **A1**: We analyze the complexity of Graph4MM from both time complexity and runtime, showing that it shares the same order of complexity as LLM-based baselines like MMGL, with only a modest runtime overhead introduced by the Hop-Diffused MM-QFormer. Specifically, the complexity of our Hop-Diffused Attention is $O(|\mathcal{V}|^2 \cdot d + K \cdot |\mathcal{E}| \cdot d)$, and the Q-Former operates at $O(|\mathcal{V}|^2 \cdot n_q^2 \cdot d)$, where $|\mathcal{V}|$ is the number of nodes, $|\mathcal{E}|$ is the number of edges, $n_q$ is the number of multimodal queries, and $d$ is the hidden dimension. However, the LLM remains the dominant cost which scales up to $O(|\mathcal{V}|^2 \cdot T^2 \cdot d)$ due to processing per-node textual inputs—similar to MMGL, where $T$ is the average token length per node (usually $T \gg n_q$ ). We also provide per-batch training and inference time comparisons in Table 2, where the runtime shows only a mild increase. **Table 1**: Per-batch training and generation time comparison for generative setting on OPT-125M backbone (Mean ± Standard Deviation). | Method | Secs per Batch (Training) | Secs per Batch (Generation) | |----------|----------------------------|------------------------------| | Graph4MM | 0.2185 ± 0.1259 | 0.0679 ± 0.0002 | | MMGL | 0.1649 ± 0.1033 | 0.0470 ± 0.0002 | All LLM-based graph learning methods—including ours and MMGL—have quadratic complexity and are typically limited to small graphs, but they provide strong zero-shot generalization capabilities beyond the reach of traditional GNNs. *** >**W2**: Also, the performance improvement might be dataset-specific, and more extensive testing is needed. **A2**: To further validate the effectiveness and generalization of our method, we introduce a new discriminative task—link prediction—on the Amazon-Sports dataset from the Multimodal Graph Benchmark (MM-Graph Bench) [1]. We randomly sample 10k source-target node pairs with equal numbers of positive and negative samples, and split the dataset into train/validation/test sets in an 8:1:1 ratio. To ensure a fair comparison, all methods use the OPT-125M backbone. The hyperparameter settings follow those used in our discriminative node classification experiments. We compare our method against the most representative and promising baselines used in our main paper, including: (a) pretrained language model using nodes' and subgraphs' text, (b) MMGL’s best-performing methods with both text and image modalities, \(c) our Graph4MM with Hop-Diffused MM-Qformer. The results, shown in Table 2, demonstrate that Graph4MM significantly outperforms all baselines, achieving an average improvement of 7.7% over the second-best method across all evaluation metrics. This further demonstrates the strong generalization ability of our method, showing its applicability across different datasets and scenarios. **Table 2**: Performance comparison in Link Prediction setting on Amazon-Sports dataset (OPT-125M Backbone). | Method | R-L | Acc (%) | Rec (%) | Pre (%) | |---------------------------------------|---------|---------|---------|---------| | (PLM) Node's Text | 0.1563 | 52.35 | 51.87 | 72.67 | | (PLM) Node's Subgraph | 0.2871 | 56.81 | 56.26 | 74.56 | | (MMGL) Node's Text & Image | 0.5603 | 93.92 | 93.86 | 95.51 | | (MMGL) Subgraph's Text & Image | 0.7352 | 95.46 | 95.44 | 94.35 | | (Graph4MM) Hop-Diffused MM-QFormer | **0.8904** | **99.98** | **99.97** | **99.86** | [1] Zhu, Jing, et al. "Multimodal graph benchmark." arXiv preprint arXiv:2406.16321 (2024). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. After reading the rebuttal and the other reviewers' comments, I’m willing to raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer xJRk, Thank you very much for your follow-up and for raising the score! We sincerely appreciate your response and the time you have taken to review our rebuttal. We will ensure that all points from the rebuttal are carefully reflected in the final camera-ready version. Best regards, #8515 Authors
Summary: The paper presents Graph4MM, for modelling multi-hop relationship within and between texts and images, modelled as an undirected graph. This approach enables the foundation model to be aware of the structural information (through the graph topology). They introduce Hop-Diffused MM-QFormer for incorporating multi-hop connectivity information and achieve SOTA results on generative (summarization) and discriminative (zero-shot fashion classification) task. Claims And Evidence: The authors provide a theoretical proof that GAT is more prone to over-smoothing effects but I doubt the proof (elaborated in Theoretical Claims). It would be nice to have it support it with a simulation study and spectral analysis. Methods And Evaluation Criteria: Yes, the benchmarks make sense and the authors present a thorough comparison across models with ablations. Theoretical Claims: 1. Rather than applying masking *before* softmax, the authors decide to apply it apply it after softmax. That is, $A_{i, j} = M_{i,j} \cdot A_{i,j}'$, where $A'$ is a row stochastic matrix (because of softmax function) and $M$ is a binary mask. This means that $A$ is not necessarily a row stochastic matrix. Why did the author go for such design choice rather than the conventional choice of masking before softmax (which was also done in Wang et al for attention diffusion? \ For proposition A.1 it relies on row stochasticity, but the $A$ is no longer row stochastic, or am I misinterpreting something? 2. I am not convinced with Proposition C.2 for GAT. Could you elaborate how: \ a. the first order Taylor expansion (about which point?) yields the said softmax approximation \ b. is the softmax assumption valid? Because effectively, you approximate all the softmax functions to be identity (L747) which doesn't seem very valid. 3. While the authors perform theoretical analysis for infinite diffusion steps/GAT Layers, they keep the number of diffusion steps to be 2. For lower values of k, I am not sure how diffusion is better than GAT (assuming the correctness of Dirichlet Energy proofs) Experimental Designs Or Analyses: The experiments are well designed, results carefully analyzed and experiment details provided in the supplementary Supplementary Material: Yes, the proofs, representative examples, experimental setting Relation To Broader Scientific Literature: The paper is surely interesting because it not only achieves SOTA results on the two benchmarks but it also contains interesting findings ans results about incorporating the topological structure of the data for generation tasks Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The paper not only attempts at achieving SOTA for the tasks but also provides general result of using pretrained vision-language representations and also multi-hop representations 2. The designed MM-QFormer incorporates multimodal information efficiently, and can be applied to other use cases as well 2. The paper is easy to read and well-presented Weaknesses: Aside from my reservations about the theoretical claims, I have following doubts: 1. The authors should include a comparison with GAT to make sure that the prowess of their method comes from (1) their way of graph modelling alone, or (2) diffusion/hop-aware attention alone, or (3) both. 2. The computational complexity of the method might impede its scalability 3. Comparison on only one generative and one discriminative task might not be enough I would be willing to increase my score if the authors are able to answer all my questions satisfactorily. Other Comments Or Suggestions: Some minor typos: 1. For proposition C.1, the authors use index $i$ for both -- row indexing and diffusion step indexing 2. Fig 2.: $t_{v_i}$ instead of $t_{i}$ and $p_{v_i}$ instead of $v_{i}$ for consistency 3. L356R "shown in" in place of "shownin" Questions For Authors: 1. Can the authors provide representative examples where GraphMM succeeds and MMGL fails? 2. Can you verify Table 2, row 5 (BLEU score for hop-aware -$t_{G'}$? Why is it significantly high? 3. Could you elaborate on R358-362? What do you mean by "modality-specific structural encoding"? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks very much for your constructive questions. We provide our response in the form of Q&A. > **Theoretical Claim (TC) 1**: Softmax after masking. In our implementation, we do apply row-wise softmax after causal masking as $A_{i:}= \text{Softmax}(M_{i:} \odot A'_{i:})$. This ensures that $A$ is row-stochastic and compatible with our analysis in Proposition A.1. We will revise equations in line 212 for clarity, provide pseudocode in the Appendix, and release our code upon publication. *** > **TC2**: Elaborate the approximation in Proposition C.2 for GAT. a. To strengthen our proof, we adopt a more general assumption inspired by [1] that avoids the approximation with Taylor expansion. Specifically, we assume the activation function satisfies $0 \le \frac{\sigma(x)}{x} \le 1$ for $x \ne 0$ and $\sigma(0) = 0$, which holds for common activations such as ReLU and LeakyReLU. We define $\sigma(0)/0 := \sigma'(0)$ or 1 if $\sigma'(0)$ is undefined. Under this, any activation $\sigma(y)$ can be rewritten as $\sigma(y) = \mathrm{diag}\left( \frac{\sigma(y)}{y} \right) y$. b. We further generalize our proof as follows by removing the previous identity-matrix-based approximation and reach the same conclusion. We denote $D_i^{(k)}$ as the diagonal transformation matrix induced by the activation at layer $k$, satisfying $\text{diag}(0) \preceq D_i^{(k)} \preceq \text{diag}(1)$. The representation $X_i^{(k+1)}$ can be recursively expanded as: $ X_{.i}^{(k+1)} = \sum_{j_{k+1}=i, (j_k, ..., j_0) \in [d]^{k+1}} ( \prod^k_{l=0} W_{j_lj_{l+1}}^{(l)} ) D_{j_{k+1}}^{(k)} A^{(k)} \cdots D_{j_1}^{(0)} A^{(0)} X_{.j_0}^{(0)}$, where $A^{(k)}$ is a row-stochastic aggregation matrix. Since all $D^{(k)}$ and $A^{(k)}$ have spectral norm less than or equal to 1, and each weight matrix $W^{(k)}$ is bounded, it follows that $\||X^{(k)}\|| \leq c^k \cdot \||X^{(0)}\||$ for some constant $0 < c < 1$. The Dirichlet energy is given by: $\mathcal{E}(X^{(k)}) = \frac{1}{n} \text{Tr}\left( X^{(k)\top} L X^{(k)} \right)$, where $L$ is the graph Laplacian. Applying the Rayleigh-Ritz Inequality yields: $\mathcal{E}(X^{(k)}) = O(\gamma^k \cdot \mathcal{E}(X^{(0)}))$ for some $\gamma \in (0, 1)$, indicating exponential decay of energy. [1] Demystifying Oversmoothing in Attention-Based Graph Neural Networks. NeurIPS 2023. *** > **TC3**: Attention Diffusion vs GAT under small $k$ for over-smoothing. To examine the over-smoothing behavior under small $k$, we further conducted a simulation study on the Cora dataset. We report the Dirichlet Energy of representations produced by GAT/Hop-diffused Attention across $k=0$ to $4$ in Table 1. (1) As $k$ increases, Hop-Diffused Attention shows slower decay than GAT. (2) Additionally, the Dirichlet Energy gap of two methods is already evident at $k=1$, suggesting that our choice of $k=2$ in the main experiments is valid. **Table 1**: Simulation results for Dirichlet Energy comparison. |Method|K=0|K=1|K=2|K=3|K=4| |---|---|----|----|----|----| |Hop-Diffused Attention|3.2445|3.0105|2.9504|2.8360|2.7843| |GAT|3.2445|1.2307|1.0293|0.8027|0.3730| *** >**Weakness (W) 1**: Disengage effects of graph modeling and Hop-diffused attention. Table 2 shows that (1) our graph modeling—adding multimodal neighbors—improves performance (Row 1→2), and (2) only hop-aware attention (Row 2→4) yields consistent gains over GAT-based structure modeling (Row 2→3), confirming the effectiveness of both components. **Table 2**: Ablation on graph and structural modeling (OPT-125M). |Method|BLEU-4|ROUGE-L|CIDEr| |------|---|----|----| |Node's T & I|0.0643|0.3825|0.6371| |Subgraph's T & I|0.0788|0.4051|0.7790| |GAT for Graph Tokens|0.0796|0.4052|0.7736| |Ours(Hop-Diffused)|**0.0800**|**0.4076**|**0.7831**| *** >**W2**: Computational complexity of Graph4MM. Due to space limits, please see [Section W1 in our response to Reviewer xJRk](https://openreview.net/forum?id=FB2e8PV6qg&noteId=qC65trdQn8). We show our method has the same complexity order as the baseline and mild runtime overhead. *** > **W3**: Should add new setting for comparison. Due to space limits, please see [Section W2 in our response to Reviewer xJRk](https://openreview.net/forum?id=FB2e8PV6qg&noteId=qC65trdQn8) for complete comparison results under a new setting. *** >**Q1**: Representative examples where GraphMM succeeds and MMGL fails. Below is a representative case where MMGL confuses node vs. neighbor features, while our structure-guided fusion yields the correct prediction. [Link to the example.](https://anonymous.4open.science/r/Graph4MM-3F30/representative%20example.png) *** >**Q2**: Why is BLUE in Table 2 high? BLEU-4 measures exact n-gram overlap. The inflation is due to the prediction containing more exact matches of domain-specific phrases from the ground truth. *** >**Q3**: Meaning of "modality-specific structural encoding". It refers to applying different structural modeling (Hop-Diffused / Hop-Aware) to each modality (vision or text). --- Rebuttal Comment 1.1: Comment: Dear authors, thanks for answering all my queries. I am satisfied with the answers. I do agree with the fellow reviewers on similarity to previous works like QFormer and limited benchmarking, I am now positive about Graph4MM. Therefore, I increase my score to 3. Regards. --- Reply to Comment 1.1.1: Comment: Dear Reviewer MweU, Thanks very much for your reply! We are more than excited to learn of your satisfaction and appreciation for our rebuttal answers. Answering your raised questions, along with those from other reviewers, improves the paper's quality. We promise to add all rebuttal answers to the updated camera-ready version. Thanks again! #8515 Authors
null
null
null
null
null
null
Is Complex Query Answering Really Complex?
Accept (spotlight poster)
Summary: Authors carefully analysed limitations of benchmark datasets used in current SOTA neural KG queries, showing that they reflect only the performance of predicting answers where only one link is truly missing and that FB15k237 and NELL995 are not suitable to precisely assess performances of QA systems. Authors proposed a new benchmark dataset and tested main SOTA QA systems, showing performances of all these systems dropped significantly. Claims And Evidence: Authors' claims are strongly supported by reproducing experiments of existing SOTA systems and by conducting their own experiments with their new datasets. Methods And Evaluation Criteria: Authors very carefully conducted their experiments and analysed existing benchmark datasets. Theoretical Claims: Authors clearly listed three takeaways as follows. 1 Not to use current CQA benchmarks as they essentially reflect only the performance of predicting answers where only one link is truly missing (for both hybrid and neural solvers). 2 FB15k237 and NELL995 are not suitable to precisely assess the capability of CQA methods to answer complex queries, as most of their QA pairs can be predicted by just predicting a single missing links. This results in highly inflated performance that distorts the perception of progress in the field and pushes the community to improve the performance only on the easiest QA pairs. 3. New benchmarks have a balanced amount of partial-inference and full-inference QA pairs that depends on the query type, allowing to measure CQA performance while not distorting the aggregate performance across all QA pairs with different hardness. Additionally, ICEWS18+H highlights that more realistic sampling strategies are more challenging for current SoTA. Experimental Designs Or Analyses: The experiment design is quite straight forward and easy to understand. Supplementary Material: Authors provided substantial supplementary material to support their claims. Relation To Broader Scientific Literature: The key contributions of this paper are related to the broader scientific literature. Essential References Not Discussed: At my knowledge, authors discussed major SOTA papers in the field. Other Strengths And Weaknesses: A piece of beautiful work! Weakness: authors do not explain at methodological level why these SOTA CQA systems fail for complex queries. Other Comments Or Suggestions: no further comment. Questions For Authors: If we make a decision by tossing a coin, we will have 50% of accuracy. But, we do not believe we are doing logical reasoning. Should there be a minimum value of accuracy, below which we shall not call a system is doing logical reasoning (here logical query)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for considering our paper a piece of beautiful work and considering our experiment design easy to understand. We proceed by answering their questions. > authors do not explain at methodological level why these SOTA CQA systems fail for complex queries. In Table 2 we showed that the performance of all SoTA methods consistently increased for QA pairs that have a higher number of existing links in their reasoning tree. This is evidence that most ''good'' performance of CQA can be explained by their memorization ability. We conclude that all methods lose performance in the new benchmarks because they overly rely on memorized facts, limiting their ability to predict answers when there are many missing links in their reasoning tree. We will add this conclusion explicitly in the camera-ready. >If we make a decision by tossing a coin, we will have 50% of accuracy. But, we do not believe we are doing logical reasoning. Should there be a minimum value of accuracy, below which we shall not call a system is doing logical reasoning (here logical query)? We see where the reviewer is leading, but for CQA, even if query answering under missing links is described using logical operators, the results it produces are derived in a mixed deductive-inductive process that returns a ranking of answer entities (measured by MRR) rather than a set of logical conclusions (which are usually meant to be derived by deduction and whose correctness might be measured in terms of accuracy). Additionally, since knowledge graphs contain thousands of entities, a randomly selected ranking would result in an MRR close to zero. Hence, it is less informative to set a hard MRR threshold to determine whether a model can be considered capable of reasoning.
Summary: This paper investigates the problems in current benchmarks for complex query answering over knowledge graphs. The main points are: 1. Previous benchmarks failed to effectively evaluate the capabilities of CQA methods because most test queries can be reduced to simpler queries, 2. Using the proposed benchmark that alleviates this problem, there is no method that can claim SOTA with a clear advantage. ## update after rebuttal The rebuttal solves some of my main concerns. I have adjusted my evaluation accordingly. Claims And Evidence: Please refer to the “Experimental Designs Or Analyses” section because these are mainly related to experiments Methods And Evaluation Criteria: The authors “balance each query type a QA pair can be reduced to”, which is confusing to me, why is this a necessary choice? And how the balancing coefficient would affect the final results? Theoretical Claims: N/A Experimental Designs Or Analyses: The authors claim that no method can claim SOTA with a clear advantage using the proposed benchmark. However, a lot of important baselines are not included, from well-established methods like Query2Box and BetaE and more recent strong baselines like LMPNN and CLMPT (and more). It’s crucial to report the performance of important methods on a new benchmark, instead of only choosing a few among others. Supplementary Material: I had a quick look at most of the contents in the appendix, especially the results on negation queries and the experiment on the number of intermediate existing entities. Relation To Broader Scientific Literature: To my best knowledge, this work focuses on CQA over KGs and does not explicitly relate to the broader scientific literature/community. Essential References Not Discussed: This work can benefit from discussing more related works, especially more recent works, including but not limited to: - https://arxiv.org/abs/2301.08859 - https://dl.acm.org/doi/abs/10.1145/3589334.3645569 - https://dl.acm.org/doi/abs/10.1145/3637528.3671869 Other Strengths And Weaknesses: It would be great if the results of queries with negation can be included in the main content Other Comments Or Suggestions: Section 6, numbers under subsection “Building new CQA benchmarks” should be 10,000 instead of 10.000. Questions For Authors: - Is there a specific reason to exclude the results and discussions on negation queries in the main content? It’s quite important because most recent CQA methods can deal with negations and use queries with negations for evaluation. - A lot of CQA methods use the data splits in the BetaE data, where the training data may also have the same query reduction problem. Is the new benchmark really harder and more suitable for evaluating CQA methods, or is it out of the training data distribution and thus not that convincing to be used for evaluation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We believe the main concerns of the reviewer can be fully addressed, as discussed in the comments below. We hope the score can be raised. >The authors “balance each query type a QA pair can be reduced to”, which is confusing to me, We acknowledge that the usage of the term ''balance'' can lead to misunderstandings. We should rather say, ``We compile the benchmark such that all query types (e.g., 3p) a QA pair can be reduced to (e.g., 1p, 2p, and 3p) appear with the same frequency in it’’. We will clarify this in the camera-ready. >why is this a necessary choice? And how the balancing coefficient would affect the final results? This is necessary because sub-types reductions in the old benchmark are not equally represented, while 1p reductions are greatly over-represented (see Table 1). As you can see in Fig. 3, by compiling the benchmark such that all query types a QA pair can be reduced to appear with the same frequency in the benchmark, the overall results significantly decrease. An explanation for this phenomenon is that in the new benchmarks, we evaluate the full spectrum of hardness rather than the easiest QA pairs (i.e., the ones reducible to 1p). > from well-established methods like Query2Box and BetaE and more recent strong baselines like LMPNN and CLMPT. Over the last years dozens of methods for CQA have been released, and it’s highly impractical to evaluate and compare all of them. For such a reason, we have selected a method for each category. For geometric embedding methods, we selected ConE, as it is an established baseline that can handle negation and has outperformed, e.g., BetaE, and Query2Box. For methods based on GNN, we chose GNN-QE, as it provides competitive performance compared to LMPNN and CLMPT on NELL995, but outperforms them on FB15k-237. That said, we also ran additional experiments on the new benchmarks with CLMPT, being the most recent baseline among the three, and our preliminary results confirm that our claims remain unchanged. In fact, similarly to the other baselines, CLMPT's performance drops on the new benchmark, but it remains competitive. For instance, below is the MRR comparison of CLMPT and the best-reported baseline for 2p, 3p, 2i, and 3i queries on FB15k-237+H: | Method | 2p | 3p | 2i | 3i | |--------------|------|------|------|------| | CLMPT | **5.3** | **4.7** | 10.2 | 12.2| | best-baseline | 5.2 | 4.0 | **11.3** | **12.8**| We will report the full results in the camera-ready. > This work can benefit from discussing more related works We will extend App. C to include the methods pointed out by the reviewer in the SoTA discussion. >It would be great if the results of queries with negation can be included in the main content. Specific reason to exclude the results and discussions on negation queries in the main content? We agree with the reviewer, we will use the additional page in the camera-ready to include results on negative queries in the main paper. We did not do it now just for space constraints. >A lot of CQA methods use the data splits in the BetaE data, where the training data may also have the same query reduction problem. Please note that we have not changed the data splits, rather we have changed the way the benchmark is created (see Sec 6, and App G). Also note that the query reduction problem can only occur at valid/test time, as it happens when certain ***training data*** makes the prediction of a QA pair easier, effectively *reducing* the QA pair to a simpler type (see Fig. 1). > Is the new benchmark really harder and more suitable for evaluating CQA methods, or is it out of the training data distribution and thus not that convincing to be used for evaluation? We remark that we only change the way the benchmark is created to ensure that the distribution is not skewed towards the easiest QA pairs. Moreover, apart from 4p and 4i queries, ***we keep the same query structures used in the old benchmarks***. Then, following [1], we could consider the query types not included in the training, i.e, ip,pi,2u,up, as well as 4p and 4i to be OOD, while considering the rest of the query types in-distribution. We also remark that there is no ground truth to compare the frequency of query patterns in the new benchmarks ***nor in the old benchmarks*** to reflect a particular real-world application. We highlight that there is no indication that the overrepresented frequency of 1p reductions (see Table 1) in the old benchmarks is representative and it skews the perception of progress. [1] Bai, Y., Lv, X., Li, J., & Hou, L. (2023, July). Answering complex logical queries on knowledge graphs via query computation tree optimization. In International Conference on Machine Learning(pp. 1472-1491). PMLR.
Summary: This paper critically examines the validity of current benchmarks for complex query answering (CQA) on knowledge graphs (KGs). The authors argue that existing benchmarks (e.g., FB15k237, NELL995) overrepresent "easy" queries that reduce to simpler tasks like link prediction (1p), distorting perceived progress in CQA. They demonstrate that up to 98% of test queries in these benchmarks can be answered by predicting only one missing link, even for ostensibly complex query types (e.g., 2i1p). To address this, the authors propose new benchmarks (FB15k237+H, NELL995+H, ICEWS18+H) that balance query difficulty by ensuring answers require reasoning over multiple missing links. Experiments show state-of-the-art (SoTA) CQA methods perform significantly worse on these harder queries, revealing their reliance on memorizing training data. The paper introduces a hybrid solver (CQD-Hybrid) combining neural link prediction with graph traversal and highlights the need for benchmarks that better reflect real-world KG incompleteness. Claims And Evidence: 1. Current CQA benchmarks are skewed toward easy queries reducible to link prediction. Systematic analysis of FB15k237/NELL995 shows 86.8–98.3% of complex queries (e.g., 2i1p) reduce to 1p (Table 1). 2. SOTA methods overperform due to memorization of training links. Performance drops sharply on full-inference queries (Table 2). Hybrid solvers (CQD-Hybrid, QTO) outperform neural models by leveraging training links. 3. Union queries (2u, 2u1p) are artificially hard due to non-existing links. Filtering non-existing links improves MRR (Table 3). Methods And Evaluation Criteria: - **Proposed Benchmarks**: FB15k237+H, NELL995+H balance query types and ensure full-inference answers. ICEWS18+H uses temporal splits for realism. Addresses distribution bias in prior benchmarks. Temporal splits better mimic real-world KG updates. But no discussion of whether the new distributions reflect real-world query patterns (e.g., frequency of 4p/4i queries). - **CQD-Hybrid**: Combines neural link prediction with graph traversal. Demonstrates hybrid methods’ superiority (Table A.3). Limited comparison to other hybrid methods (e.g., FIT). Theoretical Claims: The paper makes no explicit theoretical claims. The reasoning tree formalism (Sec. 3) is intuitive but lacks formal proofs. Experimental Designs Or Analyses: 1. Re-evaluating SoTA models on reduced query types (Tables 2, A.5–A.6) is rigorous. 2. Missing statistical significance tests for performance differences. 3. Removing non-existing links (Table 3) is valid but lacks details on filtering criteria. Supplementary Material: Reviewed Appendices A.1 (negation analysis), A.3 (CQD-Hybrid details), and A.5–A.6 (full results). The negation analysis (Table A.1) strengthens the main claims but lacks visual examples. Relation To Broader Scientific Literature: - Builds on prior CQA frameworks (Hamilton et al., 2018; Ren et al., 2020) and neural models (Arakelyan et al., 2021; Zhu et al., 2022). - Extends critique of benchmark validity similar to issues in link prediction (Kadlec et al., 2017). - Hybrid solvers (CQD-Hybrid, QTO) align with recent trends in neuro-symbolic reasoning (Bai et al., 2023; Yin et al., 2024). Essential References Not Discussed: - **FIT** (Yin et al., 2024): A concurrent hybrid solver with score calibration, which could contextualize CQD-Hybrid’s performance. - **BetaE** (Ren & Leskovec, 2020): A model handling negation, not discussed in negation analysis. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Suggestion: Include error bars in Tables 2/A.5–A.6. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for providing a precise summary of the paper and for considering our analysis rigorous. We proceed by answering their questions, believing all can be easily addressed. > no discussion of whether the new distributions reflect real-world query patterns (e.g., frequency of 4p/4i queries) There is no ground truth to compare the frequency of query patterns in the new benchmarks ***nor in the old benchmarks*** to reflect a particular real-world application. We highlight that there is no indication that the overrepresented frequency of 1p reductions (see Table 1) in the old benchmarks is representative, and it skews the perception of progress. > Limited comparison to other hybrid methods (e.g., FIT) As we argue in Section 5 (lines 323-325, left column), and as stated in the FIT paper (Appendix G.2 and Table 5 of FIT), the FIT method works like QTO on the query types that we consider in our evaluations. Therefore, we do not include an evaluation of FIT. > BetaE not discussed in negation analysis. We chose a baseline method for each relevant SoTA category. Concerning geometric embeddings (e.g., BetaE, Query2Box, ConE), we selected ConE as an established baseline that can handle negation, and it was shown to outperform BetaE. > no explicit theoretical claims. The reasoning tree formalism (Sec. 3) is intuitive but lacks formal proofs In Sec 3, we introduce the concept of “reasoning tree” in the running text to syntactically delineate trivial, inference-based, partial-inference and full-inference QA pairs. We do not make formal claims in Section 3 that could be proven. In line 210, we should rather say “We hypothesize…” (rather than “We claim…”), and this hypothesis is later empirically validated, but we do not see any way how it could be proven formally since the hardness that we refer to is the effectiveness of making correct predictions (and not a soundness or algorithmic complexity statement). > Missing statistical significance tests for performance differences By running a Mann-Whitney paired U test, we obtain low p-values in the reciprocal rank distribution of answers between models, confirming that our claims remain unchanged from those reported in our tables. For example, when executing the test for the reciprocal ranks of 2p QA pairs computed with CQD and CQD-hybrid we obtain p-value of 1.5e-4. We will include the full results in the camera-ready. > Removing non-existing links (Table 3) is valid but lacks details on filtering criteria We filter out all query-answer pairs that contain non-existing links in their reasoning tree. For example, for 2u QA pairs, if either one of the two links in the reasoning tree does not exist in the graph, we remove the QA pair. (see Figure A.1 (Bottom) for an example). It follows that only QA pairs where both links in the reasoning tree exist in the graph are retained (see Figure A.1 (Top) for an example). We will include this description in the camera-ready. > The negation analysis (Table A.1) strengthens the main claims but lacks visual examples. The visual example would be similar to the example we provided in Figure 1, and that’s the reason we did not include one. Is there a specific reason why the reviewer thinks there should be one? We can effortlessly add one in the camera ready.
null
null
null
null
null
null
null
null
How Do Images Align and Complement LiDAR? Towards a Harmonized Multi-modal 3D Panoptic Segmentation
Accept (poster)
Summary: This paper presents Image-Assists-LiDAR (IAL), a new panoptic 3D multi-modal segmentation method combining camera images with LiDAR point clouds. The key contributions are PieAug, multi-modal data augmentation with aligned data, Geometric-guided Token Fusion (GTF) for effective cross-modal fusion of features, and Prior-based Query Generation (PQG) with geometric (LiDAR) priors in a transformer decoder, with texture (image) priors. IAL achieves state-of-the-art results for nuScenes (PQ=82.0%) and SemanticKITTI (PQ=63.1%) Claims And Evidence: They are generally backed by extensive experiments and ablations. The significant improvements in PQ in typical benchmarks supports the performance of each of the proposed modules. Methods And Evaluation Criteria: The chosen methods and datasets (nuScenes and SemanticKITTI) are apt and standard for the evaluation of panoptic segmentations in the 3D domain. The proposed modules (PieAug, GTF, PQG) are well-motivated, addressing apparent limitations in the prior work. Theoretical Claims: There were no proofs given or required. Experimental Designs Or Analyses: Experimental design is solid and rigorous. Ablation experiments nicely demonstrate the contribution of each component. Authors should nevertheless indicate whether performance improvements are primarily the result of new fusion strategies or the application of pre-trained strong image models (SAM, GroundingDINO). Supplementary Material: Yes, reviewed. Supporting material includes worthy in-depth ablation studies justifying one design choice at a time and in-depth definitions. Relation To Broader Scientific Literature: The paper correctly positions itself in the context of the most up-to-date multi-modal panoptic segmentation techniques (Panoptic-FusionNet, LCPS). It correctly identifies the limitations of the state-of-the-art approaches (post-processing-intensive, misaligned augmentation) and correctly positions its contributions (aligned augmentation, fusion using transformers) in the context of the state-of-the-art. Essential References Not Discussed: The work does not mention comparisons or discussions with other top-performing LiDAR-only methods including LidarMultiNet [AAAI'23](PQ~81.4%, mIoU~82.2% on nuScenes test set) and PUPS [AAAI'23] (PQ~64.4% on SemanticKITTI validation set). A mention of those will give a better context to the performance gains and clearly highlight the strength of their multi-modal approach. Other Strengths And Weaknesses: Strength Transparent methodological developments (PieAug, GTF, PQG) Strong experimental validation with comprehensive ablations. Clear text with good figures presented. Open Source Code Release Commitment Weakness Exclusion of high-performance LiDAR-only methods (for example, PUPS, LidarMultiNet) from comparison There is no mention of the efficiency of inferences or real-world runtimes of heavy external image models (SAM, GroundingDINO). Other Comments Or Suggestions: Minor improvements could include explicitly comparing inference speed and model size against baseline models. Questions For Authors: 1. How quickly is IAL’s inference compared to its complexity and reliance on external models like SAM? Could it be run in real-time? 2. How significant is the contribution of the use of strong pre-trained image models compared to the new fusion strategy? Would less complex models be able to produce comparable results? 3. How robust is IAL when a modality is highly degraded (i.e., low image quality at night)? Does it degrade gracefully to the performance of LiDAR alone? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thoughtful and valuable feedback. ### C1: Contribution of performance improvement As in Table 8 of our supplementary material, using only texture-prior and no-prior queries (row 3) performs worse than the combination of all three query types (last row). This indicates that the performance improvements primarily stem from our novel fusion strategy—the combination of 3 types of queries in the PQG module—rather than solely relying on pre-trained strong image models, which only influence texture-prior query generation. ### C2: Discussions with other top-performing LiDAR-only methods Thank you for suggesting these two references. We will include them in the final version and provide the following discussion: **LidarMultiNet**: IAL outperforms LidarMultiNet on **all key panoptic segmentation metrics** despite using only a single frame and standard annotations, whereas LidarMultiNet leverages **extra** temporal frames, detection annotations, and multi-task learning. While LidarMultiNet slightly surpasses IAL in mIoU, this metric primarily reflects 3D semantic segmentation and is not the main evaluation metric for 3D panoptic segmentation. Moreover, some of LidarMultiNet’s effective components, such as GCP and two-stage, **do not conflict with** our multimodal method and could potentially be integrated into IAL to further enhance multi-modal 3D panoptic segmentation performance. **PUPS**: PUPS exibits inconsistent performance across datasets. Specifically, it significantly outperforms the baseline method Panoptic-PHNet on SemanticKITTI (Table 1 & 2 of PUPS) but achieves only comparable performance to Panoptic-PHNet on nuScenes validation set (Table 3 of PUPS). In contrast, IAL consistently achieves superior result over Panoptic-PHNet on both datasets. Since PUPS has not released its code for reproducibility, we hypothesize that its inconsistent performance trend may stem from overfitting to the simpler SemanticKITTI. Compared to SemanticKITTI, nuScenes presents greater challenges due to its larger scale, more diverse domains. IAL attains the highest performance on nuScenes, further demonstrating its robustness and practical effectiveness. ### C3: Inference speed and model size comparison Good catch! We provide a comparison between across all methods in the table below. | Model | Mask Proposal Method | FPS | Params (M) | PQ | |--|--|--|--|--| | LCPS | - | 1.7 | 77.7 | 79.8 | | IAL | Grounding DINO + SAM | 0.9 | 859.9 | 82.3 | | IAL* | Grounding DINO + SAM | 4.0* | 81.8* | 82.3 | | IAL | Mask R-CNN | 2.7 | 123.8M | 81.7 | To isolate the impact of external image models (SAM, GroundingDINO), we exclude 2D preprocessing step and report the efficiency of the remaining pipeline in 3rd row (denoted by *). Notably, the inference speed of our core pipeline is faster than LCPS, showing the efficiency of our design. We replace the *heavy* 2D preprocessing step (GroundingDINO and SAM) with a *lightweight alternative*, Mask R-CNN (ResNet50 backbone), as reported in 4th row. This variant strikes a balance between speed and performance, running at 2.7 FPS (1.6× faster than LCPS) while maintaining 81.7 PQ. These results indicate that while external 2D mask generation models introduce additional computational costs, our **query generation design remains flexible**, accommodating different 2D mask proposal models to **balance high-quality performance and fast inference speed**. More importantly, our **core framework remains highly efficient and robust** to different 2D preprocessing choices. Also, further optimizations such as model pruning or quantization could further accelerate inference. ### C4: Robustness when a modality is degraded Great suggestion! We evaluate the performance of IAL by comparing the LiDAR-only branch with the full model on the **nighttime** split (602 scans) of the nuScenes val set (6,019 scans), where image quality is significantly degraded. The results, (63.2 vs 70.5 PQ), show that despite the low image quality, our IAL (full model) still achieves a **+7.3\% PQ gain** over the LiDAR-only model. This improvement can be attributed to two factors: 1. **Modality-synchronized augmentation (PieAug)**, which exposes the model to more diverse samples, including nighttime scenarios, by mixing synchronized LiDAR and image data. It allows the model to generalize better to rare conditions like night scenes. 2. The combination of **3 types of queries** in PQG, where no-prior queries complement the texture-prior and geometric-prior queries, helping the model to effectively identify potential instances. Additionally, pre-trained Grounding-DINO and SAM models contribute to enhancing the robustness of 2D mask generation in night conditions, as they have been trained on vast amounts of data, improving generalization to such challenging scenarios. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed rebuttal and for addressing several of my points clearly. However, the original omission of comparisons and discussions with other top-performing methods and the high complexity and practical deployment challenges still weakens the paper’s context regarding significance. Based on the overall originality and significance, I maintain my original score. --- Reply to Comment 1.1.1: Comment: Again, we sincerely thank Reviewer #DcuJ for carefully reading and acknowledging our rebuttal. We truly appreciate your continued engagement and the recognition that some of your concerns have been addressed. Below, we would like to further clarify two points to directly address your remaining concerns. ### R1: No Comparison with LidarMultiNet and PUPS in the Original Submission For ​**​LidarMultiNet​**​, we did not initially include it because it adopts a ​**​different experimental setting​**​ from ours and other baselines. Specifically, it utilizes additional temporal frames, detection annotations, and multi-task learning, making direct comparison challenging. Furthermore, LidarMultiNet is ​**​not open-sourced​**​, preventing a fair reproduction of its results for evaluation under the common setup. Regarding ​**​PUPS​**​, as explained in our rebuttal, it exhibits ​**​inconsistent performance trends​**​ across datasets. While it significantly outperforms Panoptic-PHNet (CVPR'22) on SemanticKITTI, it only performs comparably to Panoptic-PHNet on nuScenes, which raises concerns about its generalizability. Additionally, PUPS does ​**​not release official code​**​, making it infeasible to reproduce its results for a fair evaluation. For a clear comparison, we provide comparison results on the nuScenes and SemanticKITTI in Table 1. Only val set used due to the inconsistent LidarMultiNet's test result in paper vs. leaderboard. Importantly, on nuScenes, a more challenging benchmark for multi-modal 3D scene understanding, our method achieves significant improvements over all baselines, including LidarMultiNet and PUPS. We believe this strongly validates the effectiveness of our approach. Nevertheless, we acknowledge the importance of providing a broader context and will include a discussion of these two works in our final version. ||nuScenes|||||SemanticKITTI||||| |-|:-:|-|-|-|-|:-:|-|-|-|-| |Method|PQ|PQ+|RQ|SQ|mIoU|PQ|PQ+|RQ|SQ|mIoU| |LiDARMultiNet|81.8|-|89.7|90.8|82.0|-|-|-|-|-| |PUPS|74.7|77.3|83.3|89.4|-|64.4|68.6|74.1|81.5|-| |IAL|82.3|84.7|89.7|91.5|80.6|63.1|66.3|72.9|81.4|66.0| ​*Table 1: Method Comparison. "–" denotes results not available (no code)​* ### R2: High Complexity and Practical Deployment Challenges We appreciate your feedback and understand your concerns. However, we emphasize that our method is both complexity-flexible and robust, making it practical for real-world deployment. |Row|Model|Mask/Box Proposal|FPS|Params(M)|PQ (%)| |-|-|-|-|-|-| |1|LCPS|– |1.7|77.7 |79.8| |2|IAL|Grounding DINO + SAM|4.0*|81.8*|82.3| |3|IAL|Grounding DINO + SAM|0.9|859.9|82.3| |4|IAL-V1|HTC|2.4|218.4|81.9| |5|IAL-V2|Mask R-CNN|2.7|123.8|81.7| |6|IAL-V3|Grounding DINO 1.5 Edge|3.8|– |81.3| ​*​Table 2: Model Performance Comparison​*​ First, as outlined in our rebuttal, our approach allows for flexible adaptation of the 2D mask generation (preprocessing) module, enabling deployment under different complexity and efficiency constraints. As shown in Table 2, the core design of our method (2nd row) is both highly effective and efficient compared to the main baseline, LCPS (1st row). Additionally, we introduce two more alternatives for the 2D mask generation module, HTC [1] and Grounding DINO 1.5 Edge [2], as shown in 4th and 6th rows. The results highlight the trade-off between accuracy and efficiency: when prioritizing accuracy, we can use Grounding-DINO and SAM (3rd row) - a more complex yet powerful module - to achieve superior performance (+2.5% PQ over LCPS), albeit with a lower inference speed. On the other hand, when efficiency is prioritized, adopting a lighter 2D mask generation model, such as Grounding DINO 1.5 Edge, yields higher FPS (>2x faster than LCPS) while still maintaining performance improvements over LCPS. Notably, across all 2D mask generation choices (rows 3 to 6), our IAL consistently outperforms LCPS, demonstrating the robustness and adaptability of our approach. ​ |Model|Full Val Set|Night Split|Rain Split| |-|-|-|-| |# of scan|6019|602|1088| |LCPS|79.8|64.3|76.8| |IAL|​**​82.3​**|​**​70.5​**​| ​**​81.2​**| *​Table 3: Robustness Evaluation Under Adverse Conditions​* Second, our method maintains strong performance across various environmental conditions. Beyond the nighttime evaluation included in our rebuttal, we further assess our method under another challenging scenario, the "rain" split, compared to the main baseline, LCPS. As shown in Table 3, our IAL significantly outperforms LCPS in these adverse conditions, reinforcing its robustness and practical applicability. [1]Hybrid Task Cascade for Instance Segmentation. CVPR19 [2]Grounding DINO 1.5: Advance the Edge of Open-Set Object Detection. arXiv24 **We hope these additional experiments address your concerns and further demonstrate the significance of our contributions.**
Summary: Aiming at 3D panoptic segmentation, the paper proposes using a transformer decoder to directly predict class labels and mask outputs. The authors further introduce a Geometric-guided Token Fusion (GTF) module and a Prior-based Query Generation (PQG) module to obtain effective queries and fuse tokens as input. In addition, a multi-modal data augmentation strategy is designed to augment both 3D and 2D inputs. The experiments show the effectiveness of the proposed method. Claims And Evidence: The claims are reasonable and supported. PieAug augments both modalities while maintaining synchronization across them. The GTF module addresses mismatches between modalities and scale issues by mapping the eight corner points during fusion, and the PQG module leverages priors from both modalities to initialize queries for better localization and performance. Methods And Evaluation Criteria: The proposed designs are logical, and the chosen evaluation benchmarks are suitable for the task. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experiments are conducted on two standard outdoor datasets, demonstrating the effectiveness of the proposed method. However, the ablation study is somewhat limited. For example, GTF requires projecting all the corner points for computing positional encoding and token fusion, and PQG uses three sets of queries, with two query groups needing external proposal models for initialization. It would be beneficial to know the computational efficiency of the method and the parameter count compared to previous methods. Supplementary Material: I reviewed the additional ablation studies provided in the supplementary material. Relation To Broader Scientific Literature: The proposed designs complement the broader literature and offer new insights. Essential References Not Discussed: The paper should also reference existing fusion modules proposed in multi-modal 3D segmentation literature, such as + ICCV 2023, UniSeg: A Unified Multi-modal LiDAR Segmentation Network and the OpenPCSeg codebase + ICLR 2025, Multimodality Helps Few-shot 3D Point Cloud Semantic Segmentation Other Strengths And Weaknesses: The paper is well-written and easy to follow. The motivations for the design choices are clear and reasonable. Other Comments Or Suggestions: Please see the Questions. Questions For Authors: GTF requires projecting all the corner points for computing positional encoding and token fusion, and PQG uses three sets of queries, with two query groups needing external proposal models for initialization. Could the authors provide the efficiency analysis such as comparing the computational cost and the parameter count with previous methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive feedback and constructive suggestions. ### C1: Suggestion for adding additional references. Thank you for pointing out these two papers. We will add them in the final version and provide the following discussions: 1. **UniSeg (ICCV’23)** proposes a unified multi-modal fusion strategy for LiDAR segmentation, focusing on fusing voxel-view and range-view LiDAR features with image features. However, UniSeg primarily targets semantic segmentation and does not explore how to utilize multimodal information for instance segmentation, which focuses on thing classes. In contrast, our design of *prior-based query generation* is specifically tailored for 3D instance segmentation. Furthermore, while UniSeg employs Cartesian voxelization, our method uses *cylindrical voxelization*, which has been proven more effective for panoptic segmentation in prior works like PolarNet (CVPR'20) and Cylinder3D (CVPR'21). We also incorporate *Geometric-guided Token Fusion* (including scale-aware positional encoding) to better accommodate this cylindrical representation. 2. **MM-FSS (ICLR’25)** focuses on few-shot 3D semantic segmentation in indoor scenarios by leveraging both explicit textual modalities and implicit 2D modalities for cross-dataset adaptation. This differs significantly from our work, which focuses on fully-supervised multi-modal 3D panoptic segmentation in outdoor scenarios. Additionally, the fused modalities in MM-FSS (text, image, and point cloud) differ from our approach, which uses multi-view images and LiDAR point clouds. ### C2: Could the authors provide the efficiency analysis such as comparing the computational cost and the parameter count with previous methods? Thank you for your suggestion! We provide a comparison of inference speed (FPS), parameter count, and performance (PQ, the primary evaluation metric) between our method and LCPS, the main baseline, as shown in the table below. | Model | Mask Proposal Method | FPS | Params | PQ | |-- |-- |-- |-- |-- | | LCPS | - | 1.7 | 77.7M | 79.8 | | IAL (ours) | Grounding DINO + SAM | 0.9 | 859.9M | 82.3 | | IAL (ours)* | Grounding DINO + SAM | 4.0* | 81.8M* | 82.3 | | IAL (ours) | Mask R-CNN | 2.7 | 123.8M | 81.7 | To better understand the computational cost and model size of our framework’s major components, we report a lightweight variant of our model (denoted by \* in the 3rd row), which excludes the 2D preprocessing step (Grounding-DINO and SAM). This analysis reveals that the main computational cost and parameter count stem from the sophisticated 2D preprocessing step. However, our core components - including GTF, geometric-prior query generation, 2D and 3D encoder, and transformer decoder - are highly efficient, achieving 4.0 FPS with only 81.8M of parameters. To further evaluate efficiency, we replace Grounding-DINO and SAM with Mask R-CNN (ResNet50 backbone), a lightweight 2D mask generation alternative, as reported in the 4th row. This variant achieves a well-balanced trade-off between speed and performance, with the full pipeline running at 2.7 FPS (1.6× faster than LCPS) while maintaining 81.7 PQ (+1.9% over LCPS). These results demonstrate that while external proposal models contribute to computational cost, our core method remains highly efficient and adaptable to different 2D preprocessing choices. --- Rebuttal Comment 1.1: Comment: My concerns have been addressed by the authors' detailed rebuttal. I think it meets the acceptance threshold and I have increased my score to recommend its acceptance. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your time, thoughtful feedback, and updated recommendation. We’re glad to hear that our responses have addressed your concerns, and we deeply appreciate your recognition of the value of our work. We are especially grateful for your positive assessment that our “proposed designs (PieAug, GTF, and PQG) are reasonable” with “clear motivation”, that their “effectiveness is supported” by the experiments, and that “they can complement the broader literature and offer new insights.” Your insightful comments have played a key role in strengthening our paper, and we truly appreciate your valuable contribution to its improvement.
Summary: This work proposed a novel framework for multi-model 3D panoptic segmentation. By leveraging the proposed modality-synchronized data augmentation (PieAug) with geometric-guided token fusion (GTF) and prior-based query generation (PQG), this work achieves new state-of-the-art (SOTA) performance on two challenging benchmarks, i.e. nuScenes and semantic-KITTI. Claims And Evidence: The authors claim the proposed PieAug can align multi-camera augmentation with corresponding LiDAR cylindrical voxels. This is proved by eq. 1 - 5 (theoretically), Fig. 3 (qualitatively), and Tab. 5 & 6 (quantitatively). Authors also claim that GTF can better align LiDAR and multi-camera image features by eliminating projection errors and obtaining sufficient fields to fully utilize dense image features. This is supported by eq. 6 - 8 plus Fig. 2 (theoretically) and Tab. 5 (quantitatively). The final claim is that PQG can provide sufficient proposals for panoptic segmentation using all kinds of priors. This is proved by Tab. 1 (theoretically) and Tab. 5 (quantitatively). Methods And Evaluation Criteria: The methods are novel and interesting. Extensive experimental results support the claims and demonstrate its value for multi-modal 3D panoptic segmentation tasks. The evaluation criteria are standard. Theoretical Claims: As stated in the claims and evidence, the theoretical claims are correct and supported by quantitative results. Experimental Designs Or Analyses: The experimental designs are correct and solid. The analyses are sufficient and insightful both in the main paper and supplementary material. Supplementary Material: The supplementary material is generally good. It will be even better if authors can provide the qualitative comparison for ablation and more qualitative results over datasets. Relation To Broader Scientific Literature: I believe the key contributions of the paper are related to the broader scientific literature, especially the proposed GTF. This interesting multi-modal features fusion will be quite valuable for other 3D perception tasks that are using multiple sensors. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength: - The proposed PieAug is interesting and seems quite effective. It solves the misalignment between modalities which is quite important and overlooked in the previous works. - The proposed GTF is novel and meaningful. It provides a better way to align multi-model features and can be used in the popular transformer architectures. - PQG is also novel and interesting, which derives from the observation of using ground truth (GT) center position. This finding is quite valuable to 3D panoptic segmentation. Weakness - Fig. 3 is too far from the Sec. 3.1, where the PieAug is introduced. It will be much better for the reader to get the full pictures of PieAug without page jumps and affect the reading. - The impact statement seems to be on Page 9. I believe it should be in the supplementary material, or the overall contents management needs to be more careful. Other Comments Or Suggestions: - It will be nice to have a qualitative comparison among the ablation studies so that readers can understand how much the PieAug, GTF, and PQG affect the prediction. - Tab. 8 is important for the readers to know how different prior queries affect the final results. It might be better to move it in the main paper. Questions For Authors: - Will the effectiveness of texture queries change a lot when changing the mask prior? For example, replacing Grounding SAM with Mask DINO or Mask R-CNN? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback and positive comments on our contributions. ### C1: Fig. 3 is too far from the Sec. 3.1 Thank you for the suggestion. We will update the placement of Fig. 3 in the final version to ensure that PieAug is introduced and illustrated in a more cohesive manner. ### C2: The impact statement on page 9 should be in the supplementary material Thank you for the suggestion. However, we followed the official guidelines, which recommend placing the impact statement in a separate section at the end of the paper, co-located with the Acknowledgements, before the References. ### C3: Add qualitative comparison among ablation studies Great suggestion! Unfortunately, due to the constraints of the rebuttal policy, we are unable to include images or provide links to them on the OpenReview platform. However, we will add these qualitative comparisons in the final version of the paper. ### C4: Moving Tab. 8 to the Main Paper Thank you for your suggestion. We agree that Table 8 is crucial for showing how different prior queries impact the results. We will move it to the main paper in the final version. ### C5: Will the mask prior strongly impact texture queries' effectiveness? E.g., replacing Grounding SAM with Mask DINO or Mask R-CNN? Good catch! We have replaced the mask generation module (Grounding-DINO and SAM) with alternatives, including Mask R-CNN (as you suggested) and its follow-up, the HTC model [1], and report the results in the table below. As shown, the performance of our model remains robust to the choice of mask priors from different mask proposal methods, with only a minimal performance drop when using Mask R-CNN or HTC. This indicates that our texture-prior query generation design is flexible, accommodating different 2D mask proposal models, and can be integrated with advanced off-the-shelf methods (in the future) for further performance improvements. | **2D Mask Proposal** | **PQ** | |---------------------------------|--------| | Mask R-CNN (ResNet50) | 81.7 | | HTC (ResNeXt101) | 81.9 | | Grounding-DINO + SAM | 82.3 | [1] Chen Kai, et.al. Hybrid task cascade for instance segmentation. CVPR'19. --- Rebuttal Comment 1.1: Comment: Thanks to the authors' efforts in the rebuttal. After checking other reviewers' feedback and authors' replies, I believe this work is valuable to 3D Panoptic Segmentation tasks and tend to accept it. I am willing to raise the score to 4 and hope the authors can revise the writing accordingly and release the code after acceptance. --- Reply to Comment 1.1.1: Comment: Dear reviewer, We sincerely thank you for your thoughtful feedback, the time you took to consider other reviewers' comments, your updated recommendation, and your in-depth engagement with our work. We’re glad that our responses have addressed your concerns and are especially grateful for your recognition of the value of our work. We appreciate your recognition that our proposed methods—PieAug, GTF, and PQG—are "*novel, interesting, and meaningful*", and that our claims are "*well-supported by sufficient evidence*" and "*solid experimental design*". We're particularly pleased that you found the GTF module to be a valuable contribution not only to our specific task but also to the broader field of multi-sensor 3D perception. We will further refine the paper based on your suggestions, and we will release our code to support the community. Thank you again for your constructive feedback and kind support. Best wishes, All authors
Summary: This paper proposes a new multi-modal framework for multi-modal 3D panoptic segmentation. First, the authors adopt a PieAug strategy to ensure consistency across LiDAR and image data during data augmentation. Then, they use a Geometric-Guided Token Fusion mechanism to fuse LiDAR and image tokens. Finally, several types of queries—geometric-prior queries, texture-prior queries, and no-prior queries—are used to obtain the final segmentation results. The authors conduct experiments on the nuScenes and Semantic KITTI datasets and demonstrate that the proposed method achieves better performance than state-of-the-art (SOTA) methods. ## update after rebuttal The rebuttal address most of my concerns. I increase my rating to weak accept. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. nuScenes and SemanticKITTI are popular used dataset in LiDAR panoptic segmentation tasks. Theoretical Claims: There are not theoretical claims in this paper. Experimental Designs Or Analyses: Yes. I check the main experiments and the ablation studies. Supplementary Material: Yes. I check the supplement to ablation studies. Relation To Broader Scientific Literature: 1. Prior methods also adopt prior queries\proposals from LiDAR or Image modality to enhance the performances in LiDAR detection tasks, such as TransFusion and FSF. This paper adopts prior queries in the task of panoptic segmentation. 2. Prior methods adopt image painting, cross-attention, deformable attention to fuse multi-modal information. This paper adopt directly add fusion with scale aware positional embedding. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. The paper is well-written, with clear presentation and illustrations. 2. The authors provide detailed ablation studies to demonstrate the effectiveness of each component. 3. The authors conduct experiments on nuScenes and SemanticKITTI and achieve state-of-the-art (SOTA) performance. Weaknesses 1. The adoption of cylindrical representation in the LiDAR branch lacks justification. There is no explanation as to why cylindrical representation is preferred over Cartesian representation. The scale ambiguity problem only exists in cylindrical representation. If Cartesian representation were used, the scale-aware position embedding might not be necessary, as the scale of each voxel would be consistent. 2. The overall pipeline is computationally expensive. The framework incorporates additional image models, such as Grounding-DINO and SAN, to extract 2D masks and use them as texture-prior queries. Comparing the proposed method with other methods without considering FLOPs or latency is unfair. 3. The idea of adopting proposals from heatmaps (e.g., TransFusion) or image modalities (e.g., Fully Sparse Fusion ) as priors is not novel. As shown in the supplementary materials, combining texture priors with geometric priors does not achieve better performance. The authors hypothesize that this is due to an excessive number of prior-based queries. However, adding no-prior queries increases the total number of queries but results in better performance. The authors should provide a more convincing explanation for this observation. Other Comments Or Suggestions: No. Questions For Authors: Please refer to the Weaknesses in Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback on the voxel representation, the model efficiency, and the design of query initiation. Regarding your questions, we provide the following detailed explanation. ### Q1: Justification of the adoption of cylindrical representation in the LiDAR branch. While Cartesian voxelization ensures uniform scales, it inherently struggles with the density imbalance in LiDAR point clouds. Due to **radial sparsity**—where points are denser near the sensor and sparser at a distance—Cartesian voxels fail to accommodate this variation, potentially causing uneven feature distribution and information loss, particularly in distant regions. The advantage of cylindrical over Cartesian voxel representations is well-documented in prior work, e.g., Cylinder3D (in CVPR’21, Sec. 3.2) and PolarNet (in CVPR’20, Sec 3.3), showing superior performance through adaptive partitioning of sparse distant points. Additionally, adopting cylindrical representation aligns with high-performing baselines like P3Former (IJCV’25) and LCPS (ICCV’23), ensuring consistency and fair comparison. ### C2: Computation cost comparison We compare inference speed (FPS), model size (Params), and performance (PQ, the primary evaluation metric) between our method and LCPS, the main baseline. Additionally, we report a lightweight variant of our model, denoted by *, which excludes the 2D preprocessing step (Grounding-DINO and SAM) to highlight the efficiency of our framework’s major components. Since the 2D preprocessing can be replaced with any off-the-shelf model, this variant demonstrates the trade-off between speed and performance. All latency measurements are conducted on a single NVIDIA A40 GPU with batch size 1. For fair comparison, we measure LCPS latency using its official codebase on our hardware. | | Mask Proposal Method | FPS | Params (M) | PQ | |--|--|--|--|--| | LCPS | - | 1.7 | 77.7 | 79.8| | IAL* | Grounding DINO+SAM | 4.0 | 81.8 | 82.3| | IAL | Grounding DINO+SAM | 0.9 | 859.9 | 82.3| | IAL | Mask R-CNN | 2.7 | 123.8 | 81.7| As shown in the table above, with Grounding-DINO and SAM, IAL achieves slightly lower FPS than LCPS (0.9 vs. 1.7) but **significantly improves PQ** (82.3 vs. 79.8), making it ideal for accuracy-critical applications. Removing the 2D preprocessing step (row 2) makes our model 2.4× faster than LCPS. The choice of 2D preprocessing is flexible, balancing accuracy and efficiency. For instance, replacing Grounding-DINO and SAM with **Mask R-CNN** (ResNet50 backbone, IAL-MaskRCNN, row 4) yields 2.7 FPS (1.6× faster than LCPS) while maintaining a strong PQ of 81.7 (+1.9\% over LCPS). ### C3.1: Novelty of prior-based query generation We respectfully disagree with the comment. Our prior-based query generation (PQG) is both novel and well-motivated, as acknowledged by reviewer #DcuJ. PQG fundamentally differs from TransFusion and FSF in both design and query generation mechanism. From a design perspective, unlike TransFusion and FSF, which solely rely on prior-based queries, our no-prior query solution specifically targets objects missed by geometric (3D) and texture (2D) priors, a critical gap for cross-modal fusion that has been **unaddressed in prior works**. In terms of query generation, PQG introduces a **more advanced** approach that enhances prior utilization. While TransFusion fuses image features with BEV LiDAR features for heatmap prediction (which doesn't fully leverage the rich texture features and may harm each modality during fusion), our PQG module generates 2D and 3D prior-based queries independently. This allows us to fully leverage the rich texture information without information loss and perform late fusion, better complementing the two modalities. As for FSF, they do not explicitly use queries; instead, they extract potential instances from both 2D and 3D branches and merge them via self-attention for final prediction. Our 2D query generation differs in that we use clustering to group points within a frustum and then apply Farthest Point Sampling (FPS) to select potential candidates, effectively mitigating outliers (e.g., background points). In contrast, FSF assumes all points within a mask frustum belong to the same semantic object, which can cause semantic ambiguity. ### C3.2: Performance explanation when adding "no-prior queries" We would like to clarify that we **keep the total number of queries the same** (as stated in L616-617) across all settings in Table 8. This ensures that the observed differences in performance are not influenced by the total number of queries, but instead reflect the contribution of different query combinations. Thus, the performance improvement when adding no-prior queries cannot be attributed to an increased number of queries, but rather to the complementary benefits of no-prior queries in enhancing the model’s ability to handle missed objects.
null
null
null
null
null
null
Eigen Analysis of Conjugate Kernel and Neural Tangent Kernel
Accept (poster)
Summary: This paper analyzes the spectral properties of deep feedforward neural networks with random weights, focusing on Gaussian mixture model inputs. It rigorously examines isolated eigenvalues in the conjugate and neural tangent kernels, showing how they capture group features and evolve through hidden layers, influenced by covariance differences and activation functions. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I have checked all of the proofs. Experimental Designs Or Analyses: Yes. Supplementary Material: I have reviewed the entire supplementary material. Relation To Broader Scientific Literature: The most relevant work is (Gu et al., 2024), which serves as a complementary study to this paper. Essential References Not Discussed: NO. Other Strengths And Weaknesses: Strengths: 1 This paper studies the Stieltjes transforms of CKs and NTKs using the deterministic equivalents provided by previous work (Gu et al., 2024). The work is highly thorough, and the theoretical contribution is solid. 2 The precise characterization of the isolated eigenvalues is very interesting. Weaknesses: 1 Regarding technical novelty, the spectra of CKs and NTKs with linear width have been well studied by (Fan & Wang, 2020). This paper, however, explores the case of extremely large width. The prior width setting is more practical and extends beyond lazy learning, with the development of a new theoretical framework. Moreover, the deterministic equivalents of CKs and NTKs were already provided in previous work (Gu et al., 2024), and this paper examines the same NN model. Standard random matrix tools can easily derive results for the Stieltjes transforms, isolated eigenvalues, and eigenvectors. The technique used here is standard. If there is any new technical contribution I missed, please let me know. 2 While this paper provides very complete results, the relevance between the theoretical findings and learning tasks is weak. Particularly for readers unfamiliar with RMT, the theoretical results may not offer novel insights. For example, (Gu et al., 2024) applies their theoretical results to design new compression methods. For this paper, my suggestion is to discuss the impact of the isolated eigenvalues on training dynamics or generalization performance. It could provide further insight. Overall, I am positive about this paper, but I encourage the authors to further explore the theoretical results in the context of learning. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their kind and encouraging feedback on our paper: "The work is highly thorough, and the theoretical contribution is solid. The precise characterization of the isolated eigenvalues is very interesting". This positive recognition motivates us greatly. Below, we provide detailed point-by-point responses (marked with $\Huge{\cdot}$) to address all identified weaknesses. The statements of weaknesses have been condensed to comply with character limitations. > Regarding technical novelty,..., let me know. * Fan and Wang (2020) proved the limiting spectral distributions of both CK and NTK under the linear width assumption. Subsequently, Wang et al. (2024) established the asymptotic behavior of spiked eigenvalues and eigenvectors of CK. In this paper, while our theoretical results are derived using the standard random matrix framework based on the pioneering work of Gu et al. (2024), they reveal intricate phenomena involving the isolated eigenvalues and eigenspaces of neural networks. Specifically, as outlined in Section 3.2, the first-order limits of entries in isolated eigenvectors of both CK and NTK may or may not encode group features. To clarify the differences between Wang et al. (2024) and our study, we provide the following comparisons. 1. We allow for different population covariance matrices $C_a$ in the GMM data, without relying on the $\tau_n-$orthonormal assumption introduced in Wang et al. (2024). Specifically, the $\tau_n-$orthonormal assumption requires that the difference between $tr(C_a)$’s be of order $o(p^{2/3})$, whereas our framework only requires this difference to be $O(p)$. 2. We do not require $X^TX$ to contain isolated eigenvalues, as specified in Assumption 2 of Wang et al. (2024). Instead, the isolated eigenvalues of CK and NTK can arise from the group membership structure of the input data, i.e., through the terms $VA_{\ell}V^T$ and $VB_{\ell}V^T$ in our equations (5) and (8). 3. They assume isolated eigenvalues are simple, focusing on single eigenvectors in their main theorems. In contrast, our work accommodates isolated eigenvalues with multiplicities greater than one and analyzes the corresponding eigenspaces. 4. The two studies adopt different assumptions regarding the activation functions. Their work employs weaker conditions on the moments of the derivatives, whereas ours imposes weaker constraints on the expectations of the first and second derivatives. 5. Given NTK's ability to approximate the generalization and dynamics of neural networks (NNs) on real data, our work explores the properties of NTK which are not addressed in Wang et al. (2024). Additionally, we conducted experiments on the MNIST dataset under conditions that satisfy the linear width assumption. The empirical results support the existence of distinct isolated eigenvectors as revealed by our theoretical findings. Please click the following link to view the results: <https://anonymous.4open.science/r/eig5492/README.md>. > While this paper provides very complete results, ..., provide further insight. * By analyzing NTK in ultra-wide NNs, our theoretical findings provide insights into the dynamics of training in such architectures. Let $\boldsymbol{y}$ denote the label vector of the data. When the loss function is defined as $L(t)=\|\|\boldsymbol{y}-\hat{\boldsymbol{y}}(t) \|\|^2$, where $\hat{\boldsymbol{y}}(t)$ represents the prediction at time $t$, the time evolution of the residual is approximately given by the ODE: $$\frac{d}{dt} \hat{\boldsymbol{y}} (t)= \boldsymbol{K}_{\text{NTK}} \big(\boldsymbol{y} -\hat{\boldsymbol{y}} (t)).$$ This indicates that the NTK captures the dynamics of the training process in ultra-wide NNs. We draw the following statistical implications: 1. Based on our theoretical results from Theorem 3.7 to Remark 3.9, the first-order limits of entries in the isolated eigenvectors may or may not contain group features. 2. When the eigenspace associated with the largest isolated eigenvalue contains group features, NNs tend to prioritize learning this information. Conversely, if this eigenspace lacks group features, NNs instead prioritize learning irrelevant information, diverting attention away from effective group features. 3. For non-isolated eigenspaces, we conjecture that their first-order limits contain significantly fewer group features, as observed in classical random matrices like the sample covariance matrix, where the entries of non-spiked eigenvectors are delocalized under mild conditions. Consequently, after effectively learning the group features captured by eigenspaces corresponding to isolated eigenvalues, NNs primarily shift their focus to learning from eigenspaces that lack first-order group features. In GMM setting, this implies that NNs begin leveraging more noise to learn $\boldsymbol{y}$, resulting in overfitting. This potentially provides a perspective based on NTK to understand the phenomenon of overfitting. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed rebuttal. While I do not think that the inherent limitations of this paper can be fully addressed in the rebuttal period, I appreciate the authors' efforts. I will revise my score to 4 as an encouragement. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for his/her kind and encouraging support.
Summary: The paper explores the eigenvalue spectrum of the conjugate kernel (CK) and the neural tangent kernel (NTK) with random weights. The authors demonstrate the existence of isolated eigenvalues and present a theoretical approach to identifying where they lie and their possible impact on the model. ## Update after rebuttal After the rebuttal period, I felt the authors expanded on their key points in a suitable way based on the criticism. However, I did not feel that they clearly outlined the important points and conclusion of the paper and how it builds on the current state of the art. I feel that with additional experiments and discussion related to their conclusions, the paper would be a stronger contribution. However, the work was well performed and so I am happy to raise my score by one point. Claims And Evidence: There are very few claims made in the paper rather the authors highlight specific results and theoretical insights. To this end, there is evidence to support their claims. Methods And Evaluation Criteria: Very few benchmarks and datasets are used to demonstrate the paper's main results. It focuses far more on theoretical advances. Theoretical Claims: I did not see any obvious theoretical issues but did not check all equations closely. Experimental Designs Or Analyses: The authors do not present a variety of experiments or testable hypotheses. Supplementary Material: I did not review the supplementary information extensively. It does appear that the authors support each of their claims with extensive details in the SI. Relation To Broader Scientific Literature: There are many references made to alternative research performed on the spectra of the NTK or CK matrices. The authors do not relate their work too extensively to this literature, mostly introducing their own ideas. Essential References Not Discussed: More citations could be included regarding similar work studying the spectra of the NTK. However, it would not support the experiments or results of the manuscript. Other Strengths And Weaknesses: The paper doesn't explain why these results are novel or relevant. The results appear well validated and perhaps interesting. However, how can a machine learning practitioner, theoretical or otherwise, use these results to learn more about their networks/activation functions or training? Other Comments Or Suggestions: I would suggest that the authors apply their analysis to more common network architectures and try to connect their results to understandable results in machine learning. Questions For Authors: 1. What role do the authors expect the isolated eigenvalues play in the networks during training? 2. How do you expect the results to change when applied to more architectures? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive questions. In the following, we provide point-by-point responses (marked with $\Huge{\cdot}$). > The paper doesn't explain why these results are novel or relevant. The results appear well validated and perhaps interesting. However, how can a machine learning practitioner, theoretical or otherwise, use these results to learn more about their networks/activation functions or training? * We thank the reviewer for this insightful comment. Below, we clarify the novelty and practical implications of our findings. 1. Theoretical Insights into NTK and CK: - Our theoretical results in Section 3 reveal that the first-order limits of entries in isolated eigenvectors, derived from both NTK and CK, may or may not encode group features. - To empirically verify this, we conducted an experiment using the MNIST dataset. A histogram showing the spectrum of the third layer's CK matrix and line plots of the eigenvectors for the top 5 eigenvalues can be found at the following link: <https://anonymous.4open.science/r/eig5492/README.md>. The spectrum shows four isolated eigenvalues, with the eigenvector corresponding to the largest eigenvalue being informative, while the other three appear non-informative. These observations align with our theoretical findings from Theorem 3.4 to Remark 3.6. 2. Training Dynamics in Ultra-Wide Neural Networks (NNs): - By analyzing NTK in ultra-wide NNs, we uncover deeper implications regarding the dynamics of training in such architectures. Let $\boldsymbol{y}$ represent the label vector of the data. When the loss function $L(t)=\|\|\boldsymbol{y}-\hat{\boldsymbol{y}}(t) \|\|^2 $, where $\hat{\boldsymbol{y}}(t)$ denotes the prediction at time $t$, the time evolution of the residual is approximately given by the ODE: $\frac{d}{dt} \hat{\boldsymbol{y}} (t)= \boldsymbol{K}_{\text{NTK}}\big(\boldsymbol{y} -\hat{\boldsymbol{y}} (t)).$ This indicates that the NTK captures the dynamics of the training process in ultra-wide NNs. - When the eigenspace associated with the largest isolated eigenvalue contains group features, NNs tend to prioritize learning this information. Conversely, if this eigenspace lacks group features, NNs instead prioritize learning irrelevant information, diverting attention away from effective group features. - For non-isolated eigenspaces, we conjecture that their first-order limits contain significantly fewer group features, as observed in classical random matrices like the sample covariance matrix, where the entries of non-spiked eigenvectors are delocalized under mild conditions. Consequently, after effectively learning the group features captured by eigenspaces corresponding to isolated eigenvalues, NNs primarily shift their focus to learning from eigenspaces that lack first-order group features. In GMM, this implies that NNs begin leveraging more noise to learn $\boldsymbol{y}$, resulting in overfitting. This potentially provides a perspective based on NTK to understand the phenomenon of overfitting. 3. Impact of Activation Functions on Group Features Encoding: - As demonstrated in Figure 4 of our paper, different activation functions yield distinct behaviors in the eigenvalues and eigenvectors. A carefully selected activation function (e.g., ReLU in Figure 4(b) and 4(d)) can effectively encode group features within NNs, facilitating the integration of group features from the initial stages. - These findings suggest that careful selection of activation functions can significantly enhance a NN's ability to capture and represent group features. > What role do the authors expect the isolated eigenvalues play in the networks during training? * Please refer to our response to the previous question, specifically points 1 and 2, which address the role of isolated eigenvalues in networks during training. > How do you expect the results to change when applied to more architectures? * We believe that isolated eigenvalues and eigenspaces are likely to exist across various NN architectures (e.g., fully connected neural networks (FCNs) with dropout), provided the input data exhibits certain classification characteristics. Investigating these isolated eigenvalues and eigenspaces in different NN structures remains a largely unexplored problem. However, under current theoretical frameworks, it is challenging to directly prove the existence of isolated eigenvalues and eigenspaces in architectures different from FCN. For instance, whether the spectral equivalence described in Lemma 2.4 and Lemma 2.5 holds for other NN architectures remains unknown. We aim to address these related challenges in our future work. Furthermore, in the revision we will include more references on the spectra of NTK and CK matrices, such as Pennington and Worah (2017, NIPS), Yang and Salman (2019, arxiv), Hron et al. (2020, ICML), Murray et al.(2022, arxiv), Wang et al. (2023, NIPS), and Belfer (2024, JMLR). --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed explanation of the core results. I still feel that the message is convoluted at times and if it were presented clearly, the authors would benefit greatly. I have a few comments regarding the rebuttal. * "may or may not encode group features" is quite ambiguous. What should I take from that sentence? * From my understanding, it is widely known that eigenvectors of dominant eigenvalues are important in training, but this was also extended to additional vectors in understanding evolution, e.g., in https://arxiv.org/abs/1812.04754. But to argue this definitively, the authors would need to show that it holds on more datasets. It could simply be true that this only occurs on the dataset you studied. * Why is it surprising that the NTK captures training dynamics in ultra-wide networks? The matrix describes the state of a network given its data and if this state changes, so too will the NTK. If the network is very wide, this change becomes very slow / small which corresponds to a small change in the NTK. Eventually, this would be seen a small enough change for the difference in NTK states to be descriptive. * Activation functions helping to discern features in data is of interest, but I would like to be convinced that this analysis goes further than other works. * Perhaps something that would help with explaining the results here. If the authors wanted to fit a problem "perfectly", ideal data distribution and network, what would the spectrum need to look like? --- Reply to Comment 1.1.1: Comment: We are truly grateful to the reviewer for the prompt feedback. Below, we provide point-by-point responses (once again marked with $\Huge{\cdot}$). > "may or may not encode group features" is quite ambiguous. What should I take from that sentence? * By the expression "may or may not encode group features", we mean that the eigenspace corresponding to isolated eigenvalues can either be informative or non-informative, as defined in Definition 3.3 of our paper. For your convenience, we attach the definition here: > Definition 3.3. We say the eigenspace defined in (20) is informative if there is a nonzero matrix $\mathbf{A}(\boldsymbol{J})$ depending on $\boldsymbol{J}$ such that $$\|\|\frac{1}{p}\boldsymbol{J}\^T\widehat{\boldsymbol{\Pi}}\_{\rho}\boldsymbol{J}-\mathbf{A}(\boldsymbol{J})\|\|=o_{a.s.}(1).$$ Otherwise if $\mathbf{A}(\boldsymbol{J})=0$, then the eigenspace is non-informative. - Recall that $\boldsymbol{J}=[\boldsymbol{j}\_1,...,\boldsymbol{j}\_K]\in\mathbb{R}\^{n\times K}$, where $\boldsymbol{j}\_a=\\{\delta\_{{x_i}\in\mathcal{C}\_a}\\}\_{i=1}^n$ encodes the group membership information in GMM. The matrix $\mathbf{A}(\boldsymbol{J})$ reflects information about the projection of $\boldsymbol{J}$ onto the eigenspaces corresponding to isolated eigenvalues. A nonzero value indicates the eigenspace contains membership information, i.e., it encodes group features. In constrast, a zero value implies that such eigenspace is orthogonal to $\boldsymbol{J},$ rendering it non-informative and not encoding group features. - Our theoretical results in Section 3.2 demonstrate that the informativeness of an eigenspace corresponding to an isolated eigenvalue is jointly determined by the covariance structure of the input data and the choice of activation function. Specifically, this is established in Theorem 3.4—Remark 3.6 for CK, and Theorem 3.7—Remark 3.9 for NTK. - Furthermore, previous experiments on MNIST (<https://anonymous.4open.science/r/eig5492/README.md>) align with our theoretical findings. The spectrum exhibits four isolated eigenvalues, with the eigenvector corresponding to the largest eigenvalue being informative (clearly distinguishing two classes), while the remaining three seem to be non-informative. > From my understanding, ... studied. * Following your suggestion, we conducted additional experiments using another dataset, CIFAR-10. Similar to the MNIST dataset, we present the spectrum histogram and eigenvector plots; see Figures 1-3 at the new link: <https://anonymous.4open.science/r/C5492-FC8C/README.md>. The spectrum shows nine isolated eigenvalues, with the four largest eigenvectors being informative and the remaining five non-informative—similar to observations from MNIST. > Why is it surprising that the NTK ... descriptive. * We appreciate your insightful comments and fully agree with the points you've raised. To clarify, what surprises us is not the observation that the NTK captures training dynamics in ultra-wide NNs. Rather, what we find interesting—and what our study highlights through explicit eigen analysis of the NTK—is that even within the low-dimensional top subspace, the eigenspaces corresponding to isolated eigenvalues can differ significantly in informativeness. While some isolated eigenspaces are indeed informative, encoding meaningful group features, others may be non-informative. This nuanced finding, as detailed in our response to your first comment, offers us a deeper understanding of the NTK's role in training dynamics. > Activation functions ... works. * Regarding isolated eigenvalues and eigenvectors in CK and NTK, the most recent study we are aware of is Wang et al. (2024) on CK. In our response to Reviewer Hb59's first comment, we provided a detailed comparison with their work. Specifically, in terms of activation functions, their work employs weaker conditions on the order of derivatives, while ours imposes weaker constraints on the expectations of the first and second derivatives. Moreover, we allow the use of different activation functions across different layers, providing enhanced flexibility in activation function choices for practical applications. > Perhaps something ... look like? * If we understand the comment correctly, it inquires about the structure of the CK or NTK spectrum in the ideal case where the data, network, and all assumptions in our paper are satisfied. We believe that typical spectra of CK and NTK exhibit the following structure: the non-zero eigenvalues ($\lambda_n\le\ldots\le\lambda_1$), except for the isolated ones, are clustered within one or more intervals. Within each interval, the eigenvalues are tightly grouped $(\lambda_i-\lambda_{i+1}=o_p(1))$. In contrast, the isolated eigenvalues are distinctly separated from these intervals as well as from zero. For reference, we conducted an experiment to visualize this structure; please see Figure 4 at the new link: <https://anonymous.4open.science/r/C5492-FC8C/README.md>.
Summary: This paper analyzes the eigenvalues and eigenvectors of the Conjugate Kernel (CK) and Neural Tangent Kernel (NTK) for deep feedforward networks with random weights, in a high-dimensional setting where the input data come from a Gaussian Mixture Model (GMM). The authors show that, under certain assumptions, “spiked” or isolated eigenvalues of the CK and NTK can emerge outside the main (bulk) support of the limiting spectral distribution. They then precisely characterize the limiting positions of these outlier eigenvalues and analyze the asymptotic structure of the eigenvectors (or eigenspaces). The main conceptual point is that these isolated eigenvalues reflect group (or cluster) structure in the data—i.e. the eigenvectors encode the mixture components. Intuitively, the paper demonstrates how features of the data propagate layer by layer in a randomly initialized deep network, indicating how certain group structure is captured in these outlier modes. Overall, it provides theoretical insight into how CK/NTK spectra depart from their bulk distribution in the presence of class- or cluster-relevant structure. ## update after rebuttal I am happy with the rebuttal responses and will maintain my originally high score 4 (Accept). **Important caveat** I do agree with reviewer `Hb59` that a proper discussion of Wang et. al. 2024 is highly warranted given the high degree of similarity. If the paper is accepted, Authors should be super clear about what is new or different in their results compared to the existing results. Claims And Evidence: While this is a theoretical contribution and the support of claims is primarily theoretical, here are limited experiments that support the key claims: - Main Claim 1: “Certain eigenvalues of the CK/NTK can lie outside the bulk of the spectral measure in high-dimensional regimes.” Evidence: The authors present formal statements (Theorem 3.1 & 3.2) they show simulations that show outliers numerically. - Main Claim 2: “These outlier eigenvalues correspond to group or cluster structure in the data.”(Theorem 3.4 & Corollary 3.5). Evidence: They provide numerical plots (eigenvectors strongly correlate with mixture class membership). - Main Claim 3: “The position of these isolated eigenvalues and the structure of their eigenvectors can be characterized by expansions related to the data covariance and the choice of activation function.” Evidence: The paper provides explicit formulas (involving terms α_ℓ,2, α_ℓ,3, etc.) for the CK and NTK spiked eigenvalues. The simulations with polynomial vs. ReLU activations illustrate how the location of outliers depends on activation. Methods And Evaluation Criteria: the methods and evaluation approach (matching theoretical spectral predictions to empirical ones) are well-aligned with the paper’s theoretical goals: - The authors’ primary method is an asymptotic random matrix theoretical derivation of eigenvalue limits. They compare theoretical predictions with histograms of empirical eigenvalues from random networks. - The data are synthetic or follow a controlled GMM with different class covariances, which is reasonable for theoretical validation. Theoretical Claims: - The main theorems (Theorem 3.1 & 3.2) rely on finite-rank perturbation arguments from random matrix theory. The approach of showing that the empirical kernel matrix is well-approximated by a low-rank correction to a known limiting distribution is standard. - The authors reference prior results on deterministic equivalents for CK and NTK from (Gu et al., 2024) and from classical RMT references (e.g., Bai & Silverstein). The extension to identifying spiked eigenvalues and projecting onto the eigenspace (Theorem 3.4) is also sound - While I did not check all the proofs in depth, I did not spot an obvious algebraic or conceptual flaw, they seem sensible in the sense that they rely on well known established lemmas (like Woodbury identity, resolvent expansions, Schur complement formula, etc.). However the proofs are quite technical and dense, and I might have missed something. - One area that I am less certain about: In a multi-layer setting, different features are tied/coupled together, and they are not necessarily a Gaussian matrix to approximate their eignevalues. Can authors explain how they can seeming go around this dependencies to be able to apply free probability on deep representations? What are the key facts/lemmas that enable this? Experimental Designs Or Analyses: Overall, the experimental design matches the paper’s theoretical claims and is valid for a proof-of-concept demonstration: - The experiments primarily involve plotting eigenvalue histograms and the top eigenvectors for various settings (e.g. polynomial vs. ReLU activation). This is sufficient for a theoretical paper. - The dimensionalities (n, p) used are large enough (e.g. n=1200, p=600) to demonstrate the asymptotic effect, and the results show visually distinct outliers. Supplementary Material: I only looked at the supplementary proofs section A but did not follow proofs in depth (which I outlined in the theoretical claims section): the Appendix outlines the detailed proofs of lemmas about the resolvent expansions, the derivation that the spiked eigenvalues converge, and how the projection onto the corresponding eigenspace is computed. known RMT lemmas (e.g., from Bai & Silverstein) is adequately cited and typically repeated in the supplementary for completeness. I did not notice major inconsistencies . Relation To Broader Scientific Literature: The paper extends and complements a line of research studying neural kernels in high dimensions using random matrix theory. - CK/NTK in Infinite Width: Prior works (Jacot et al. 2018;) derived closed-form expressions for infinite-width kernels but did not analyze a mini-batch of samples, and hence analyze spiked eigenvalues beyond the main spectrum. The present paper looks at how mixture-structure in data can cause outliers in large finite dimension. - Spiked Models in RMT: The approach of analyzing finite-rank deformations of a baseline limiting distribution is reminiscent of Baik et al. (2005) and other spiked covariance matrix models. Here, the baseline is the limiting kernel, and the finite-rank perturbation arises from class-structured data. - Empirical CK/NTK Spectra: Related to works by Fan & Wang (2020) or Wang et al. (2024), who studied the spectral distribution in large neural networks. This paper approach is similar but focuses specifically on outliers and their eigenvectors in presence of cluster / class structure. - Feature Learning & Data Clusters: The demonstration that the outlier eigenvectors align with class membership echoes analyses in Worth improving: - “Neural Collapse”: I thin the connection of this result to neural collapse worth discussing. While the paper is about a random state / initialization, one could view this class-structure as a form of neural collapse or other clustering phenomena in wide networks, but from a pure RMT perspective. This might be interesting for some readers. - "Feature learning": because we know that classical NTK is not leading to learning dynamics, it might be worth making some statements of how these results could affect feature learning in middle layers? In that sense, some brief mention and contrast with feature learning literature (Tensor Programs, muP literature) might also be worthwhile. Essential References Not Discussed: the paper already cites much of the very relevant literature. But it might benefit from some other citation that are more broadly related: - Neural Collapse (Papyan et al., 2020) for how class means or class structure leads to low-rank phenomena in feature space. This might give the authors a link between their results and the class structure that emerges naturally during training in neural collapse. - Yang’s Tensor Programs (Yang, 2019–2021) might be relevant because it also addresses expansions of NTK under more general architectures. Although the paper’s setting is quite different, referencing it could highlight synergy between the two lines of theory. Other Strengths And Weaknesses: Strengths: - Novelty: Detailed spiked eigenvalue analysis for CK/NTK in multi-layer random networks with GMM data is new and addresses a gap in the literature. - Technical Rigor: The paper’s theorems are grounded in well-established RMT methods (deterministic equivalents, rank-1 updates). - Clarity & Focus: the statements of main theorems and the structured proofs in the appendix are well written Weaknesses: - Data Model & Assumptions: Results rely on a GMM with somewhat idealized settings (independence, certain covariance scaling). If this is truly about initialization and the input is taken to be the real input, in reality, large-scale datasets do not always behave like mixtures of Gaussians. For example, the structure of the NTK and Conjugate kernels as revealed by lemma 2.4 to be a mere low-rank perturbation of the input at any depth, seems overly simplistic of the way that real networks shape and transform input distributions. - Training: While the paper makes brief remarks on training, given that one of the key findings of the paper is on NTK, more exploration and discussion of training dynamics (though it is acknowledged as outside scope) would probably add some value Other Comments Or Suggestions: - As mentioned earlier, some discussion around neural collapse and feature learning literature would add value for some readers. Considering inputs to be clustered is a highly idealized and unrealistic initialization (as devil's advocate would ask: why would we need a neural network if the input is clustered?). Thus, making some connection to neural collapse would make the results more interesting for some readers. - The paper celebrates its key finding as finding connection between outlier eigenvalues and the activation itself, via the recurrent equations defined over moments of the activation. But they do not explore differences of activations from the point of view of their theory. For example in Figure 4 a& b for poly and ReLU activation look largely & qualitatively the same, which seems to underserve the key focus of this paper, which is uncovering role of activation on the magnitude and position of eigenvalue outliers. My suggestion is that authors try to draw some implications from their theory about the activations that are not currently known, and show that that empirically holds. Many future readers may find these practical insights to be be the key takeaways of this paper, and without them, they will be left with no takeaways. Questions For Authors: - My most important lingering question is, most classical results in free probability either work for independent features, or if they are dependent but follow Gaussian distributuion , so their dependence is captured by the covariance. However, once we pass them through one layer of non-linear activations, neither of which is true. How can we get the eigenvalue distribution of the conjugate and tangent kernel here? Is the key the class-structure in the inputs and the resulting low-rank perturbation that enables the rest of the analysis? - The follow-up to the previous question is could your finite-rank spike analysis be extended to more general distributions than the GMM? For example, what if instead of class-structure, we have an input whose eigenspectrum of covariance is explicitly given to us? Regardless of the spiked eigenvalues structure, Can a similar analysis reveal the bulk eigenspectrum conjugate and tangent kernel ? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for his/her encouraging feedback. The recognition of our work's novelty, technical rigor, and clarity is highly motivating. We also greatly appreciate the insightful comments provided. In the following, we provide point-by-point responses (marked with $\Huge{\cdot}$). The statements of comments have been condensed due to character limitations. > The paper already cites much of the very relevant literature. But it ... two lines of theory. * We thank the reviewer for sharing these references, which will be incorporated into the revised version. > Data Model & Assumptions: ... distributions. * GMMs are widely used in machine learning analysis due to the technical convenience. In large-scale neural network models, mixture model behave asymptotically as if GMMs (Seddik el al. (2020, ICML); Couillet and Liao (2022, Cambridge University Press)). In fact, from the perspective of random matrix theory, many properties of Gaussian random matrices can be extended to random matrices with general distributions under certain moment conditions—a phenomenon known as universality (Ding and Yang (2018, AOAP)). To support our findings, we perform an experiment on the MNIST dataset. The visualization of the spectrum and eigenvectors of the CK matrix can be accessed via the following link: <https://anonymous.4open.science/r/eig5492/README.md>. The spectrum reveals four isolated eigenvalues. Notably, the eigenvector corresponding to the largest eigenvalue contains group features, whereas the remaining three appear to lack such information. These empirical observations are consistent with our theoretical results presented in Theorem 3.4 through Remark 3.6. > Training: While ... some value. * We appreciate the suggestion to discuss training dynamics, and while we agree it would be valuable, space constraints prevent us from including it here. Instead, we kindly refer you to our detailed response to Reviewer Hb59’s second question, where this topic is addressed in detail. > As mentioned earlier, some discussion around neural collapse ... readers. * We acknowledge the importance of discussing the neural collapse phenomenon in relation to our study, as both lines of work consider input data with inherent cluster structures. Neural collapse illustrates that when training error reaches zero, classifiers tend to converge towards class means. In contrast, our research demonstrates that even without training, certain eigenvectors of kernel matrices (NTK and CK) derived from neural networks can encode group features within isolated eigenvalues. We will provide discussions on neural collapse and feature learning in the revised version. > The paper celebrates its key finding ... takeaways. * According to Lemma 2.4, the bulk eigenvalue distributions of the CK matrix exhibit similar curves across different activations, differing only by a constant scaling factor. However, Figure 4(b) shows an isolated eigenvalue for the ReLU activation, whereas Figure 4(a) does not display this phenomenon for the polynomial activation. This observation aligns with our findings on the influence of activations on the emergence of isolated eigenvalues. In the revised version, Section 3.3 will clearly elaborate on this point to highlight the effect of different activations. > My most important lingering question ... analysis? * Firstly, we would like to clarify that our analysis is based on the asymptotic spectral equivalence of kernel matrices, as established by Lemmas 2.4 and 2.5 in our paper. This equivalence allows us to operate within a low-rank perturbation framework, thereby circumventing the technical complexities arising from dependencies introduced by nonlinear activation functions. Consequently, our results are derived using standard random matrix theory tools, without the need for free probability theory techniques. > The follow-up to the previous question ... kernel? * Our theoretical results hold for the GMM model. If the eigenspectrum of the covariance matrix is explicitly given without any class structure, it simplifies to a special case of GMM with only one group. Consequently, the asymptotic spectral equivalence holds for the CK and NTK matrices, as stated in Lemmas 2.4 and 2.5. This reduction allows us to focus on investigating the isolated eigenvalue and eigenvector of matrices of the form: $$X^TX+a\mathcal{1}\mathcal{1}^T+b\psi\psi^T+c\mathbf{I},$$ where $\mathcal{1}$ and $\psi$ are two vectors. Under certain conditions on $(a\mathcal{1}\mathcal{1}^T+b\psi\psi^T)$, isolated eigenvalues may emerge. Regarding bulk eigenvalues and eigenvectors, the limiting spectral distribution is provided in Lemma 2.6. Further theoretical analysis, such as the rigidity of the bulk eigenvalues and the delocalization of eigenvectors, are beyond the scope of this paper. However, we believe that it is feasible to establish the local behavior of the bulk spectrum if the local laws for CK and NTK matrices can be derived.
Summary: This paper studies outlier eigenvalues and eigenvectors of conjugate and neural tangent kernels for multi-layer fully connected neural networks at random initialization. The dataset can be a general Gaussian mixture model. This result shows how the information of the group features in the dataset propagates through the multiple layers via the outlier eigenvalues and corresponding eigenvectors. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: No, this paper doesn't propose any method or evaluation criteria. Theoretical Claims: Yes, I checked the correctness of the proofs for theoretical claims. The results are basically based on previous works, e.g.Benaych Georges & Couillet, 2016. Experimental Designs Or Analyses: Yes, the numerical experiments of the paper validate the results of its theoretical analysis. Supplementary Material: No, this paper doesn't provide any supplementary material. Relation To Broader Scientific Literature: This paper uses random matrix theory to understand the extreme eigenvalues and eigenvectors of empirical kernels in random neural networks. Essential References Not Discussed: There are more references that should be mentioned or discussed in the paper: 1. Ba et al 2023 (https://openreview.net/forum?id=HlIAoCHDWW) studied the Gaussian data with one spiked direction in the population covariance matrix and used the spike in conjugate kernel to study the feature learning. 2. There are several papers that considered a similar Gaussian mixture setting and random neural networks: Liao, Couillet 2018 https://proceedings.mlr.press/v80/liao18a/liao18a.pdf, Couillet, Liao, Mai 2018 https://ieeexplore.ieee.org/document/8553034, Liao, Couillet 2019 https://ieeexplore.ieee.org/document/9022455, Ali, Liao, Couillet 2020 https://openreview.net/forum?id=qwULHx9zld. It would be better to have some discussions on these papers. 3. Liao, Couillet, and Mahoney 2020 (https://papers.nips.cc/paper/2020/hash/a03fa30821986dff10fc66647c84c9c3-Abstract.html) studied the random Fourier feature model and the random feature regression model in this case. Other Strengths And Weaknesses: ## Strengths: This presentation is clear and well-organized. The mathematical results are clearly presented. ## Weaknesses: 1. Wang et al., 2024 in the reference have studied the propagation of the spike eigenvalues and eigenvectors for conjugate kernel matrix through multiple layers. I do not know the novelty of this paper comparing with Wang et al., 2024. Wang et al., 2024 also have similar simulation for gaussian mixture dataset. There should be a clear comparison with this paper. Do the results of Wang et al., 2024 totally cover the results of CK in the current paper? 2. Although the mathematical results presented in Section 3 are clearly presented, there is a lack of explanations regarding the statistical implications of these results. Can we derive any practical applications or insights for the machine learning community from these findings? Even some case studies or remarks should be better included in the main text. 3. A comprehensive literature review should be included to provide context and establish the relevance of the work. Additionally, a discussion should be presented outlining the limitations of the current study. Other Comments Or Suggestions: See weaknesses. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for acknowledging our paper as well-organized and clearly presented, and we appreciate the constructive comments. Below, we provide point-by-point responses (marked with $\Huge{\cdot}$). The statements of weaknesses have been condensed to comply with character limitations. > Wang et al., 2024 in the reference ... paper? * We thank the reviewer for this important comment. The pioneering work by Wang et al.(2024) offers elegant results on the spiked eigenvalues and eigenvectors of CK. To clarify the differences between Wang et al.(2024) and our study, we provide the following comparisons. 1. We allow for different population covariance matrices $C_a$ in GMM, without relying on the $\tau_n-$orthonormal assumption introduced in Wang et al. (2024). Specifically, this assumption requires the difference between $tr(C_a)$’s to be $o(p^{2/3})$, whereas our study only requires this difference to be $O(p)$. 2. We do not require $X^TX$ to contain isolated eigenvalues, as specified in Assumption 2 of Wang et al.(2024). Instead, the isolated eigenvalues of CK and NTK can arise from the group features of input data, i.e., through the terms $VA_{\ell}V^T$ and $VB_{\ell}V^T$ in our equations (5) and (8). 3. They assume isolated eigenvalues are simple, focusing on single eigenvectors in their main theorems. In contrast, our work accommodates isolated eigenvalues with multiplicities greater than one and analyzes the corresponding eigenspaces. 4. The two studies adopt different assumptions regarding the activation functions. Their work employs weaker conditions on the moments of the derivatives, whereas ours imposes weaker constraints on the expectations of the first and second derivatives. 5. Given NTK's ability to approximate the generalization and dynamics of neural networks (NNs) on real data, our work explores the properties of NTK which are not addressed in Wang et al.(2024). Additionally, we would like to point out a typo: the condition $trC_a^o=O(\sqrt{p})$ in Assumption 2.1 is unnecessary. This correction, along with the aforementioned comparisons, will be included in the revised version. > Although the mathematical ... text? * For NTK: Let $\boldsymbol{y}$ represent the label vector of the data. When the loss function $L(t)=\|\|\boldsymbol{y}-\hat{\boldsymbol{y}}(t) \|\|^2 $, where $\hat{\boldsymbol{y}}(t)$ denotes the prediction at time $t$, the time evolution of the residual is approximately given by the following ODE: $$\frac{d}{dt} \hat{\boldsymbol{y}} (t)= \boldsymbol{K}_{\text{NTK}} \big(\boldsymbol{y} -\hat{\boldsymbol{y}} (t)).$$ This indicates that NTK captures the dynamics of the training process in ultra-wide NNs. We draw the following statistical implications: 1. Based on our theoretical results from Theorem 3.7 to Remark 3.9, the first-order limits of entries in the isolated eigenvectors may or may not contain group features. 2. When the eigenspace associated with the largest isolated eigenvalue contains group features, NNs tend to prioritize learning this information. Conversely, if this eigenspace lacks group features, NNs instead prioritize learning irrelevant information, diverting attention away from effective group features. 3. For non-isolated eigenspaces, we conjecture that their first-order limits contain significantly fewer group features, as observed in classical random matrices like the sample covariance matrix, where the entries of non-spiked eigenvectors are delocalized under mild conditions. Consequently, after effectively learning the group features captured by eigenspaces corresponding to isolated eigenvalues, NNs primarily shift their focus to learning from eigenspaces that lack first-order group features. In GMM, this implies that NNs begin leveraging more noise to learn $\boldsymbol{y}$, leading to overfitting. This potentially provides a perspective based on NTK to understand the phenomenon of overfitting. We will include this discussion in the revised paper. * For CK: We conducted an experiment on the MNIST data. A histogram showing the spectrum of the third layer's CK matrix and line plots of the eigenvectors for the top 5 eigenvalues can be found at the following link: <https://anonymous.4open.science/r/eig5492/README.md>. The spectrum shows four isolated eigenvalues, with the eigenvector associated with the largest eigenvalue being informative, while the other three appear non-informative. These observations align with our theoretical findings from Theorem 3.4 to Remark 3.6. > A comprehensive literature ... study. * We will add all works mentioned by the reviewer, as well as additional relevant studies, including Yang and Salman (2019,arxiv), Bai and Lee(2020,ICLR), Hron et al.(2020,ICML), Papyan et al.(2020,PNAS), Yang(2020,arxiv), Wang et al.(2023,NIPS), Dandi et al.(2024,arxiv), and Engel et al.(2024,ICLR). Moreover, we will discuss potential extensions such as relaxing the distribution assumption and considering other architectures.
null
null
null
null
null
null
Smoothed Normalization for Efficient Distributed Private Optimization
Reject
Summary: This work focuses on differential privacy in the federated learning. It mentions clipping-based DP-FL optimization like DP-SGD is hard to converge due to clipping bias, especially for non-convex, smooth problems. Instead of (adaptive) clipping, this work chooses smoothed normalization to tackle the problem by proposing a method called alpha-NormEC, which combines smoothed normalization and error feedback. Claims And Evidence: This work walks through problem formulation and one de-facto popular solution, DP-SGD, in a good way. By discussing limitations (e.g., less convergence) of related works, this work points out clipping approach does not quite consider clipping bias or introduce too many constraints, especially in the federated learning. The necessity of this work is well pronounced based on success of smoothed normalization in single-node DP setting and good-but-can-be-better error feedback methods. Such combination can lead a good result to tackle the convergence of non-convex and smooth functions. Methods And Evaluation Criteria: The methods in this work is more like an incremental approach since it is highly relied on the existing mechanism like EF21. But the idea to introduce smoothed normalization is a great to make the mechanism converge faster even with bias in the distributed (federated) learning. The evaluation on image classification task is a good criteria to measure performance of DP-related problems. Theoretical Claims: I carefully walked through the Theorems and proofs. To my best knowledge, those claims are convinced to me. Experimental Designs Or Analyses: Although image classification is a good task to conduct experiments, the CIFAR-10 dataset is pretty small and quite outdated in 2025. Also considering current LLM trending, ResNet20 is also a less convincing architecture to work on. The scalability is a very important aspect to make the work more realistic and useful than the prototype level. Supplementary Material: I have read the Contents 3.4, B and H in details. Relation To Broader Scientific Literature: 1. This work points out neglect of clipping bias in the current DP-SGD-related works, which are important for private optimization (not just FL) to converge. 2. Although smoothed normalization is a promising approach to tackle the issue, the cited work Yang et al. (2022) looks plausible and is rejected by TMLR last year, which contains fatal errors (For more info, take a look this link https://openreview.net/forum?id=wLg9JrwFvL). Probably that paper is not a good work as a main citation in the introduction. 3. Bu et al. (2024) is a great work to cite and is highly related to this work. That work provides convinced results on smoothed normalization for non-federated case. However, the assumptions are restricter than this work and is not trivial to extend to federated learning. 4. This work also well studied advantages and limitations of distributed/single-node non-private methods related to clipping. 5. Error feedback as a main course in this work is also well-surveyed. Essential References Not Discussed: In the beginning of the paper, it said “no DP distributed method for smooth non-convex optimization problems”. However, to my knowledge, "DIFF2” paper cited in this work (published on ICML 2023) has discussed the problem for federated learning and provide a feasible framework. Can you explain more on the difference between DIFF2 paper and yours? Other Strengths And Weaknesses: Strengths: 1. Hyperparameters to tune the alpha-NormEC are easier to implement and faster to convergence for non-convex and smooth functions than prior works. And it requires no additional restrictions. 2. DP version of alpha-NormEC guarantees convergence even considering DP-related bias. Weakness: 1. Besides dataset and model architecture, this work put too much experimental details to the appendix. For example, comparison to existing work like Clip21 is a good evidence to convince audience on the effectiveness of this work instead of only reading theoretical parts. 2. This work just discusses the difference with pure clipping approach in the text rather than conducting any comparison experiments (not even mentioned in the appendix). Other Comments Or Suggestions: 1. In the preliminaries, 3.3 discussed too much details in DP-SGG. The audience of this paper could DP-related researcher on ML, so DP-SGD probably is a default knowledge. I suggest to move details of DP-SGD in the appendix and mainly focuses on limitations. 2. As discussed in the experimental design section, decode-only (e.g., transformer) models may be better than ResNet20 to produce convincing results to LLM-era researchers. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer LMxv, We appreciate your time, effort, and thoughtful feedback. We thank you for your appreciation towards the contributions of our work on leveraging smoothed normalization and error feedback to design distributed algorithms with the first provable convergence under the privacy budget. > In the beginning of the paper, it said “no DP distributed method for smooth non-convex optimization problems”. However, to my knowledge, "DIFF2” paper cited in this work (published on ICML 2023) has discussed the problem for federated learning and provide a feasible framework. Can you explain more on the difference between DIFF2 paper and yours? DIFF2 assumes both smoothness of objective functions and bounded gradient conditions to derive the convergence. Unlike DIFF2, DP-$\alpha$-NormEC achieves the utility guarantee under smoothness without bounded gradient conditions. Next, on the one hand, DIFF2 uses local gradient differences $\nabla f_i(x^{k}) - \nabla f_i(x^{k-1})$, and, thus, requires computing two gradients at different points at every iteration. On the other hand, $\alpha$-NormEC privatizes the difference between the local gradient and memory vector $\nabla f_i(x^{k}) - g_i^{k}$. Moreover, while DIFF2 adds the private noise on the server, $\alpha$-NormEC adds the noise to the updates from the clients before being communicated to the server. > Although smoothed normalization is a promising approach to tackle the issue, the cited work Yang et al. (2022) looks plausible and is rejected by TMLR last year, which contains fatal errors (For more info, take a look this link https://openreview.net/forum?id=wLg9JrwFvL). Probably that paper is not a good work as a main citation in the introduction. Thank you for pointing us out. We will remove Yang et al. (2022) as a main citation in the introduction section, but keep it in the related work section. > Besides dataset and model architecture, this work put too much experimental details to the appendix. For example, comparison to existing work like Clip21 is a good evidence to convince audience on the effectiveness of this work instead of only reading theoretical parts. We agree with this suggestion. We will move results in the ablation study from the appendix into the main text, such as the results showing that $\alpha$-NormEC provides slightly strong convergence performance than Clip21 in both non-private and private settings. > This work just discusses the difference with pure clipping approach in the text rather than conducting any comparison experiments (not even mentioned in the appendix). In Appendix H.2, we compare DP-SGD and $\alpha$-NormEC. Our results indicate that Error Compensation significantly improves over the baseline. > In the preliminaries, 3.3 discussed too much details in DP-SGG. The audience of this paper could DP-related researcher on ML, so DP-SGD probably is a default knowledge. I suggest to move details of DP-SGD in the appendix and mainly focuses on limitations. Thank you for your suggestion. We will improve Section 3.3 by moving DP-SGD details to the appendix, allowing for a more detailed explanation of its convergence limitations. This revision will be included in the next version. > As discussed in the experimental design section, decode-only (e.g., transformer) models may be better than ResNet20 to produce convincing results to LLM-era researchers. We appreciate your comment. We agree that DP-$\alpha$-NormEC will be of huge interest in private training of LLM models. We will provide additional experiments on training transformer models in the revised manuscript. We hope these clarifications have sufficiently addressed your concerns, providing a clearer understanding of our study's contributions and implications. We are eager to engage in further discussion to resolve any remaining concerns. Please consider the score accordingly. Best Regard, Authors
Summary: This paper proposes a distributed optimization algorithm (called α-NormEC). It uses smoothed normalization with error feedback to solve non-convex, smooth optimization problems in both non-private and differentially private settings. The method claims to achieve provable convergence guarantees without requiring bounded gradient norms. It claims to be the first method to provide convergence guarantees for private training under standard assumptions, addressing the challenges of clipping bias in distributed differentially private optimization. The paper also demonstrates empirical performance on practical tasks. Claims And Evidence: Overall, the paper is fine (the problem is valid, and the solution seems to be working, although I cannot test it directly or replicate it), but there are certain elements that seem to be oversimplified or glossed over, which, if not the case, then are poorly explained. The proposed algo provides convergence guarantees (theoretically), but the baseline assumption is the objective functions' smoothness and bounded from below. But what if these are not in real-world scenarios, like heterogeneous data noisy gradients to begin with, or non-smooth func as in some DL? The paper could have benefited from including a deeper analysis and providing better arguments in this regard. The paper makes a point of highlighting the issues with the gradient clipping technique. But, there is no mention of adaptive or data-driven clipping techniques. Once again, an analysis or comparison of smoothed normalization and the adaptive clipping methods would have clarified this. It would have actually shown if the proposed work is even better than or at par with them. Methods And Evaluation Criteria: Expanding on the same point of verboseness of the paper, the paper could have benefited from better evaluations. Currently, it is very specific to CIFAR-10 using ResNet20. This feels like just proof of concept type work. The paper positions itself that the application of this work is in federated learning; however, it does not add the scalability of communication efficiency. The argument can be that the paper focuses on providing the non-convex solution more as compared to scalability and efficiency; however, then it can be countered that this work is not only for federated learning and can be applied to other domains, hence its relevance to ICML is week. NLP, time series forecasting, reinforcement learning, or something similar could have been used to demonstrate its application better. The paper should provide a more detailed analysis or experiments showing how the method performs as ϵ approaches higher values (e.g., closer to 1 or higher) and the impact of this on both privacy and utility. The paper lacks a clear discussion of the trade-off between privacy loss and model performance Theoretical Claims: Not all of them. The proofs are primarily listed in the appendices (which are longer than the paper itself). The proofs assume smoothness and bounded loss functions, which may not always hold in practical (real-world) non-convex optimization problems. Experimental Designs Or Analyses: The dataset diversity is limited, and hence the question of generalizability arises. Comparisons with other better baselines could also have improved and highlighted the work's impact. Real-world applications will require communication and other computational overheads, hence the feasibility is still a question (although I understand this may not be a huge issue). A more elaborate privacy-utility analysis could be beneficial, especially for extreme privacy settings (financial, medical, etc.) Supplementary Material: I did give the appendices an overview, but not in depth. However, I do feel the paper is imbalanced. The initial parts of the paper are quite abstract and verbose. The paper gives a lengthy literature review, which could have been simplified to make room for some more results that have been shown in the appendices. It feels like the paper is too verbal (even outside the Intro and related works parts). For example, Page 4, right column, 2nd para could have been shorter and to the point; similarly, subsection 3.4 in the same column begins with lengthy premises; page 6, "Proof outline of α-NormEC. We outline the proof for αNormEC" heading and sentence are the same thing. I believe that the supplementary material is unnecessarily long when some of its essential points could have been in the paper itself. Relation To Broader Scientific Literature: For the most part, I am satisfied by the literature review, although it is quite lengthy. However, as mentioned earlier, the adaptive clipping literature and its comparison would be beneficial. Essential References Not Discussed: Note, that it is not sufficient that the paper cites another work, but also that it discusses the the proper context. Andrew et al., "Differentially Private Learning with Adaptive Clipping," NeurIPS 2019: cited, but has to be compared to. AdaCliP: Adaptive Clipping for Private SGD (Pichapati et al., 2019) Gradient Sparsification for Efficient Wireless Federated Learning with DP (Wei et al., 2023) Adaptive Gradient Sparsification for Efficient FL (Han et al., 2020) Other Strengths And Weaknesses: Strengths: The paper provides the convergence guarantee for private training without assuming bounded gradient norms. Presents a rigorous mathematical treatment of α-NormEC and its convergence properties. Introduction of smoothed normalization as a substitute for gradient clipping is an interesting idea. α-NormEC can be applied in distributed settings, making it relevant to federated learning. Weakness: Paper does not benchmark against recent advances in adaptive clipping and gradient sparsification. Does not fully analyze how privacy noise affects training under different privacy budgets (ϵ, δ). Weak empirical evaluation. No Discussion of Momentum and Adaptive Learning Rates in DP-SGD. Other Comments Or Suggestions: A symbol table would have been beneficial, given the extensive appendices. Typo P1: To enfore DP Questions For Authors: 1- α-NormEC performance at low privacy budgets? 2- How does α-NormEC compare to adaptive clipping? 3- Elaborate on computational and communication costs. 4- Gradient sparsification as a comparison, and α-NormEC applied to non-iid data in FL? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear reviewer yzkj, We appreciate your time, effort, and thoughtful feedback. > The proposed algo provides convergence guarantees (theoretically), but the baseline assumption is the objective functions' smoothness and bounded from below. Our assumptions are standard for analyzing distributed algorithms on non-convex problems, e.g. by Nesterov et al. (2018) and [A]. Furthermore, in the context of differential privacy, we do not impose bounded gradients, which are restrictive but used by prior literature. [A] Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., & Suresh, A. T. (2020, November). Scaffold: Stochastic controlled averaging for federated learning. In International conference on machine learning (pp. 5132-5143). PMLR. ​​>Weak empirical evaluation. Although our main focus is on algorithmic development, our work contains an extensive experimental section that evaluates several algorithms in different settings, including the ablation studies of the design components of our method. Furthermore, our experimental setting is standard in the literature on differential privacy (Papernot et al., 2020; Li and Chi, 2023; Allouah et al., 2024). Can you elaborate? We will be happy to incorporate your constructive suggestions. > 1- α-NormEC performance at low privacy budgets? We ran $\alpha$-NormEC across a range of $\beta$ values in Figure 3, which shows a privacy-utility trade-off. Lower $\beta$ results in smaller added noise, resulting in higher privacy loss and decreased performance of the model. The effect of tuning $\beta$ was observed also for the non-private version of $\alpha$-NormEC (without DP noise). Furthermore, we performed additional comparisons of the methods for the private training, where DP-$\alpha$-NormEC significantly outperforms DP extension of Clip21. Kindly see [the attached plot link](https://postimg.cc/bDhgkq7R). > 2- How does α-NormEC compare to adaptive clipping? > … there is no mention of adaptive or data-driven clipping techniques. We develop distributed algorithms with the first provable convergence without relying on restrictive bounded gradient assumptions. Adaptive clipping is orthogonal to our work, due to smoothed normalization that removes the need to tune the private noise. In Paragraph 3 of introduction, we reviewed adaptive clipping techniques, such as by Andrew et al., (2019), and recent results by Merad & Gaïffas (2023) and Shulgin & Richtárik (2024). Andrew et al., (2019) does not provide any guarantees we can compare to. Shulgin & Richtárik (2024) showed the first theoretical analysis only in the centralized (single node) case, and showed that SGD with Adaptive clipping suffers from a similar non-convergence issue as SGD with constant clipping (Koloskova et al., 2023). Moreover, Pichapati et al., (2019) considers coordinate-wise adaptive clipping for AdaCliP that is orthogonal to our work, and presents its analysis of the single-node case under the bounded gradient assumption, the condition our distributed algorithms do not require >3- Elaborate on computational and communication costs. At every iteration, each client computes the local gradient, updates local memory $g_i^{k+1}$, and sends the privatized difference between the gradient and memory vector to the server. Then, the server performs averaging and sends the updated model back to the clients. There is no additional communication or computational overhead compared to distributed SGD. >4- Gradient sparsification as a comparison, and α-NormEC applied to non-iid data in FL? Compression (Sparsification) techniques are complementary to our work, as compression does not preserve privacy. Sparsification does not guarantee bounded sensitivity and has to be combined with clipping or normalization for differential privacy. We believe $\alpha$-NormEC can be combined with compression to further improve communication efficiency while it preserves privacy. $\alpha$-NormEC with compression can be used to train over arbitrarily heterogeneous and non-iid data, thanks to error compensation mechanisms similar to EF21 that remove the uniformly bounded heterogeneity assumptions. > No Discussion of Momentum and Adaptive Learning Rates in DP-SGD. We will add the discussion on techniques like momentum and step size DP-SGD in the revision as they can be beneficial for empirical performance. Error Compensation can be viewed as a variation of server momentum equivalent to classical heavy-ball momentum up to reparametrization [B] . [B] Garrigos, Guillaume, and Robert M. Gower. "Handbook of convergence theorems for (stochastic) gradient methods." arXiv preprint arXiv:2301.11235 (2023). Next, thank you for the typos and suggestions. We will reorganize the initial part of the paper and shorten the literature review to include more results in the main part. We hope these clarifications have addressed your concerns. Please consider the score accordingly. Best Regard, Authors
Summary: This paper studies federated learning with gradient clipping in the non-private and private settings. In the non-private setting, their algorithm matches existing results for clipped methods. In the private setting, to my knowledge, their convergence results are new. Claims And Evidence: Their authors claim their method matches existing theoretical convergence guarantees and empirically is more practical. They provide proofs for the theoretical claims and experiments to back up the empirical performance of the algorithm. One argument they make against prior clipped SGD methods is high sensitivity to the clipping norm, whereas their method is less sensitive to the (new) hyper-parameters. They provide experiments to justify this. Methods And Evaluation Criteria: The method is evaluated through a formal convergence rate and experiments. Theoretical Claims: The privacy and convergence proofs look correct from my reading. Experimental Designs Or Analyses: The design of the experiments which are included makes sense. Supplementary Material: I reviewed the convergence proofs. Relation To Broader Scientific Literature: This works builds on previous work that has studied clipping in both federated/non-federated and private/non-private settings. In particular, the method the authors propose seems to be a careful combination of previous techniques, error feedback and smoothed normalization. To be honest, this feels more like a federated learning paper than a distributed learning paper. At every iteration, each client is computing a gradient at the same globally known point, x^k. If the communication to do this is happening, why would existing federated methods not readily apply? Essential References Not Discussed: I think the literature on non-federated non-convex optimization could be discussed more. For example, the idea of using gradient differences in smooth non-convex optimization goes back to at least [1]. I understand there is nuance in the distributed setting, but it seems to me this nuance is lost since each client is computing its gradient at the same local iterate, x^k, each round. [1]: Spider: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator Other Strengths And Weaknesses: The presentation of the paper is very nice, and makes for easy reading. The authors also do a great job describing clearly the shortcomings of previous work and their contributions. I think the experiments do a good job of addressing obvious follow up questions that would arise for proposing a method like this. With that said, the spirit of DP SGD with clipping is to avoid assuming regularity conditions on the loss. It's true that the authors avoid Lipschitzness, but they do so only at the expense of imposing smoothness, which is arguably even harder to come by in modern methods. Other Comments Or Suggestions: At line 267, the authors remark that the Clip21 algorithm from Clip21 algorithm achieves 1/K, I assume they mean 1/sqrt{K} as per the discussion in the Appendix? Questions For Authors: How is the initial gradient being made private? It is strange that it is passed in as a parameter to the algorithm. Relatedly, I have some concern with the premise of the paper. The authors perform all their updates using gradient differences, and also assume the loss is smooth. In the DP setting, why would we ever need to clip if this were the case? The sensitivity of the updates is already bounded by smoothness. For example, one could do something like Algorithm 1 in [1] (but without "resetting" the gradient estimator). Further, the authors claim their result is the first convergence rate for DP distributed learning. But their algorithm involves communicating x^k to each client every round. At that point, it seems DP federated learning is a more fair comparison, for which there are many existing convergence results. [1]: Faster Rates of Convergence to Stationary Points in Differentially Private Optimization Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer BGLy, We appreciate your time, effort, and thoughtful feedback. > At every iteration, each client is computing a gradient at the same globally known point, x^k. Could you please specify the federated methods? Because federated learning algorithms exchange the local model updates, not the gradients as required by $\alpha$-NormEC. However, $\alpha$-NormEC can be modified to FedAvg algorithms. The generalization of our methods to federated settings presents a challenging direction, as it requires modifications of our current analysis. > At line 267, the authors remark that the Clip21 algorithm from Clip21 algorithm achieves 1/K, I assume they mean 1/sqrt{K} as per the discussion in the Appendix? We apologize for this misunderstanding. At line 267, Clip21 attains the $\mathcal{O}(1/K)$ convergence of the **squared** gradient norm $\|| \nabla f(x) \||^2$. This statement is equivalent to the $\mathcal{O}(1/\sqrt{K})$ convergence in the gradient norm of Clip21 in Appendix E. We will revise the discussion at Line 267. > How is the initial gradient being made private? **Privatization of the difference between the local gradient and memory:** As the initial gradient $\nabla f_i(x^0)$ is not shared with the server, it is not needed to be privatized. Only the **difference** between local gradient and memory vector at the initialization $\nabla f_i(x^0) - g_i^0$ is privatized and sent to the server with the gaussian mechanism. **Initialization of the memory vectors:** The memory vector $\hat{g}^0$ on the server is initialized as $\frac{1}{n} \sum_{i=1}^n g_i^0$ (Line 948). Our analysis from Lemma 3 allows easy extension to arbitrary initialization,e.g. $\hat{g}^0 = \frac{1}{n}\sum_{i=1}^n g_i^0 + e$. Here, this additional error term $e$ can be small if we privately estimate the mean of vectors $g_i^0$, which incur once and thus a small privacy loss (compared to the iterative process). Furthermore, secure aggregation techniques can eliminate this error entirely. For instance, if clients agree on a shared random seed, they can add and subtract cryptographic noise to their local memory vectors, respectively, without affecting the average. Consider two clients with local memory vectors $g_1^0$ and $g_2^0$. The first client can add cryptographic noise $h$ to $g_1^0$ and the second client subtract $h$ from $g_2^0$. This would protect the vectors $g_i^0$ from the server but the average would be exactly the average as $\frac{1}{2}(g_1^0 + h) + \frac{1}{2}(g_2^0 - h) = \frac{1}{2}(g_1^0 + g_2^0)$. > Relatedly, I have some concern with the premise of the paper. The authors perform all their updates using gradient differences, and also assume the loss is smooth. In the DP setting, why would we ever need to clip if this were the case? The sensitivity of the updates is already bounded by smoothness. We assume smoothness of the loss function $\||\nabla f(x)-\nabla f(y)\|| \leq L\||x-y\||$, the most standard assumption in non-convex optimization literature. It does not imply bounded sensitivity of the gradient $\||\nabla f(x)\||$ unless the domain is bounded ($\||x-y\|| \leq \mathcal{D}$) which is a restrictive condition. Existing literature considers the convergence of DP distributed algorithms for minimizing smooth losses by further imposing bounded gradient conditions. This restricts the class of optimization problems. These bounded gradients can be achieved by assuming either Lipschitz continuous functions (as in [1]) or bounded domain. These conditions allow us to bound the sensitivity of the updates without applying clipping or normalization. However, this sensitivity is impossible to compute for many loss functions used in training machine learning models. Even when it can be estimated, its bound is often overly pessimistic, thus leading to excessively large DP noise and thus significantly degrading the algorithmic convergence performance.” > Further, the authors claim their result is the first convergence rate for DP distributed learning. But their algorithm involves communicating x^k to each client every round. We believe there is a misunderstanding. Our algorithms communicate the normalized gradient difference $N_{\alpha}(\nabla f_i(x^k)-g_i^k)$, not the iterates $x^k$. Furthermore, $\alpha$-NormEC attains convergence under privacy budget for minimizing smooth functions without assuming bounded gradients and/or ignoring the effect of normalization, which existing literature often requires. To our knowledge, only Das et al. (2022) and Li et al. (2024) analyze DP federated methods without restrictive bounded gradient assumption. However, these algorithms (Algorithm 1 of Das et al. (2022)) with one local step (E=1) result in DP distributed clipped gradient methods. These algorithms using clipping/normalization even without the DP noise do not converge for simple examples (see Section 3.4). To fix the convergence issue, we leverage error feedback. Best regards, Authors --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. To follow up on the point about the gradient updates, my point was that you don't need to bound the norm of the individual gradients to bound the sensitivity of the gradient difference estimate. If you estimated gradient differences using $\nabla f_i(x^k) - \nabla f_i(x^{k-1})$, the sensitivity of this vector is bounded via smoothness. Is there a good reason not to do this in the setting you consider? --- Reply to Comment 1.1.1: Comment: Dear Reviewer BGLy, Thank you for the follow-up question. We appreciate the opportunity to clarify the design choices in `𝛼-NormEC`. In `𝛼-NormEC`, each client does **not** compute the gradient difference $\nabla f_i(x^k)-\nabla f_i(x^{k-1})$. Instead, it computes only **one** local gradient $\nabla f_i$ at point $x^k$, and the difference between the gradient and memory vector $g_i^k$: $\nabla f_i(x^k)-g_i^k$ due to the use of Error Feedback (EF21) framework: 1. **EF21 Mechanism:** `𝛼-NormEC` leverages EF21 mechanism [Richtárik et al., 2021] to mitigate bias from normalization/clipping. EF21 achieves this by operating on the error-corrected gradient $\nabla f_i(x^k) - g_i^k$, where $g_i^k$ is local error memory. The term communicated to the server *within this framework* is $\mathrm{Norm}_{\alpha}(\nabla f_i(x^k) - g_i^k)$. 2. **DP Requirement:** To ensure privacy, noise must be added to the *communicated* quantity. Thus, we privatize $\mathrm{Norm}_{\alpha}(\nabla f_i(x^k) - g_i^k)$. 3. **Need for $\mathrm{Norm}_{\alpha}$:** The input norm $||\nabla f_i(x^k) - g_i^k||$ is **not** bounded by smoothness alone, as $g_i^k$ accumulates errors and $||\nabla f_i(x^k)||$ can be large (see point 4). Applying smoothed normalization is crucial *within EF21* to guarantee a bounded sensitivity ($||\mathrm{Norm}_{\alpha}(\cdot)|| \leq 1$) for the communicated term, allowing calibrated DP noise addition. 4. **Smoothness vs. Bounded Sensitivity:** We reiterate that smoothness ($||\nabla f(x) - \nabla f(y)|| \leq L ||x-y||$) does **not** imply bounded gradient norm $||\nabla f(x)||$ or bounded $||\nabla f_i(x^k) - g_i^k||$ over an unbounded domain. For instance, considering $y = x^*$ (a stationary point where $\nabla f(x^*) = 0$), smoothness implies $||\nabla f(x)|| = ||\nabla f(x) - \nabla f(x^*)|| \leq L ||x - x^*||$. This bound can be arbitrarily large if $x$ is far from $x^*$. Bounded sensitivity arises from stronger assumptions like Lipschitz continuity of the *function* ($|f(x) - f(y)| \leq l ||x-y||$, implying $||\nabla f(x)|| \leq l$) or a bounded domain. However, assuming Lipschitz continuity restricts the problem class (e.g., excluding quadratic loss) and the constant $l$ is often unknown, making DP noise calibration impractical. Our approach avoids these stronger assumptions by using $\mathrm{Norm}_{\alpha}$ under only standard smoothness. 5. **Alternative Approach:** Privatizing $\nabla f_i(x^k) - \nabla f_i(x^{k-1})$ represents a fundamentally different algorithm that *does not use EF21's bias correction*. Analyzing it would require a separate framework and potentially face different challenges regarding bias accumulation under only smoothness. We hope this clarifies our rationale. Thank you again for your constructive engagement.
Summary: Clipping the gradients is a common practice in differentially private training with DP SGD and a common technique used to analyze the privacy-utility trade-off of DP-SGD. However, as the authors correctly point out, most theoretical works ignore the effect clipping can have on convergence by assuming bounded gradients and ignoring the effect of clipping altogether. Smoothed normalization is a recent technique that offers a more amenable analysis of DP-SGD without requiring stringent restrictions on the gradients of the clients. This paper shows that smoothed normalization leads to a contractive property on the gradient, making it amenable to the error feedback framework. The paper then analyzes the distributed DP-SGD with smoothed normalization and error-feedback, offering convergence guarantees that illustrate the algorithm privacy-utility trade-off. In doing so, the paper offers an analysis for private smooth non-convex optimization without any bounded gradient assumptions. The paper also offers some experiments comparing their algorithm to other algorithms and testing its sensitivity to hyperparameter tuning. Claims And Evidence: No. Please see the questions I ask the authors. The claims made about the convergence guarantees in the paper are not accurate and require extreme conditions on the initialization (which seems hard to satisfy without initializing at a point that is already close to stationarity). Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. The proofs are correct, albeit the main recursion, which bounds $\|\nabla f_{i}(x^k) - g_i^{k+1}\|$ looks pretty loose. Experimental Designs Or Analyses: Yes, the experiments look alright, but I am not convinced that the method is significantly better than Clip21 based on the experiments in Appendix H.4. I believe these experiments should have been included in the paper, and the fact the empirical difference between the two algorithms not high be discussed as a limitation. Supplementary Material: Yes. I reviewed the proof to understand which parts of the analyses might be loose. Relation To Broader Scientific Literature: The paper tries to fill the gap in the existing literature by optimizing smooth non-convex functions privately without making any restrictive bounded gradient assumptions. It also addresses this in the novel federated/distributed setting. These analyses require fewer assumptions than existing works but have some significant limitations that I point out below, which require restrictive initialization assumptions. Essential References Not Discussed: I can't think of any critical references the authors missed. Other Strengths And Weaknesses: The paper is written and identifies a gap in existing literature on private optimization accurately. The proofs are easy to follow and verify as well. The biggest weakness of the paper is that the final convergence guarantees are too weak, without making very restrictive assumptions about the initialization. I discuss this extensively below in questions for authors. Owing to these limitations, I can not recommend accepting the paper and urge the authors to revisit their analyses to identify any loose inequalities. Other Comments Or Suggestions: **1.** The second $L$ should be $L_i$ on line 140 in Assumption 1. **2.** I am not sure I understand the following comment on lines 184 - 186: > Thirdly, the condition in (4) is “pathological” in the distributed setting as it restricts the heterogeneity between different clients and can result in vacuous bounds Why would restricting the data heterogeneity lead to vacuous bounds? Restricting the data heterogeneity should improve the convergence guarantees of at least local update methods. Furthermore, the discrepancy between client gradients could be much smaller than $\phi$, thus avoiding the pessimistic dependence on an enormous $\phi$ in analyzing the "consensus error" between clients. **3.** Lines 258-266 are written awkwardly and need to be rephrased. **4.** Why do the authors refer to the initialized memory vectors as $g_i^{-1}$ in lines 288 (column 2) and 301 (column 1) instead of $g_i^0$? This seems to be a typo. Questions For Authors: **Q1.** It is not obvious why it is possible to pick $x^0$ and $g_i^0$ in Corollary 1 to ensure that $\|\nabla f_i(x^0) - g_i^0\|$ is very small. For instance, if the clients' gradients are small ($O(1/\sqrt{K})$) to begin with at $x^0$, then we are done. If this is not the case, one could hope to initialize far away from the optimizers on each machine so that all gradients roughly point in the same direction, but then, if the gradient norms are large, it is not clear if one can find an appropriate $g_i^0$, which could make the difference be much smaller than the gradient norm itself. Could the authors comment on the feasibility of this? To me, it seems like the following inequality in the proof sketch is very loose when we are close to a stationary point, $$\|\nabla f_i(x^k) - g_i^{k+1}\| \leq \max_{i\in[n]}\|\nabla f_i(x^0) - g_i^{0}\|.$$ This constraint in the guarantee almost feels like "cheating." **Q2.** The criticism of Clip21 in requiring the initial function sub-optimality to tune its step size seems unreasonable. For instance, the choice of $\beta$ after Corollary 1 also depends on the same function sub-optimality. The authors should remove this statement, unless I am missing something? I also disagree with the argument in Appendix E, which essentially relies on making D very small to improve over Clip21. As mentioned above, D can not be made arbitrarily small, at least not without further assumptions about the problem. **Q3.** The issue raised in Q2 is further exacerbated in Corollary 2. In particular, let's say we want to get to $\epsilon_{err}$ stationarity guarantee. Then, we need to set $R = O(\epsilon_{err})$ (which I argued is already hard). Furthermore, to make the first term small, we need to ensure that $$\frac{L_{max}\alpha (f(x^0) - f^{inf})^2 d^{1/2} \log^{1/2}(1/\delta)}{n^{1/2}\epsilon_{err}^3} \leq \epsilon .$$ With a moderate error rate $\epsilon_{err} = 10^{-2}$, $n=100$ machines and with only $d=100$ dimensions (which is very modest), and $\delta = 10^{-2}$ this implies $\epsilon > 10^6 L_{max}\alpha (f(x^0) - f^{inf})^2$, which is pretty bad. This implies that the privacy-utility trade-off offered by the analyzed algorithm is very poor, and I am not willing to buy that this gives any meaningful privacy guarantee for a reasonable utility. Again, I feel that the presentation and discussion of the results are misleading (hopefully not deliberately). I suspect that, in both analyses, the authors use some very loose inequalities, which causes the final dependence on $R$. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer zAsL, We appreciate your time, effort, and thoughtful feedback. > Q1. It is not obvious why it is possible to pick and in Corollary 1 to ensure that is very small. We would like to clarify your misunderstanding. We can initialize $x^0,g_i^0 \in \mathbb{R}^d$ to ensure that $\|| \nabla f_i(x^0)-g_i^0 \||$ is small. For instance, we can choose $x^0$ to be any vector, and $\nabla f_i(x^0)$ does not need to have a small Euclidean norm. Then, we can set $g_i^0=\nabla f_i(x^0) + e$ where $e = (D/\sqrt{K+1}, 0, \dots, 0)$ with any $D>0$ and any total iteration number $K$, and our condition naturally satisfies $\||\nabla f_i(x^0) - g_i^0 \||= D/\sqrt{K+1}$. We will include this discussion after Corollary 1 into the revised manuscript. > Could the authors comment on the feasibility of this? To me, it seems like the following inequality in the proof sketch is very loose when we are close to a stationary point, We kindly disagree that our proof is loose. Existing convergence analysis of Clip21 cannot be applied to prove its convergence in the **private** setting. This motivates us to redesign distributed algorithms using normalization, instead of clipping, to achieve **the first provable utility guarantee in the private setting under smoothness without bounded gradient conditions**. Therefore, our approach for bounding $\psi^k := \|| \nabla f_i(x^k)-g_i^k \||$ differs significantly from Clip21, which is heavily based on EF21. We derive the induction proof showing **the monotonicity of $\psi^k$**. This novel condition is **less demanding/restrictive than strong contractivity of $\psi^k$**, i.e. $\psi^{k+1} \leq (1-q) \psi^k, \forall q \in (0, 1]$, which Clip21 and EF21 rely on for variance-reduced Lyapunov-based analysis. > the choice of $\beta$ after Corollary 1 also depends on the same function sub-optimality. The authors should remove this statement, unless I am missing something? Since $\|| \nabla f_i(x^0)-g_i^0 \||$ can be made small by choosing $x^0,g_i^0\in\mathbb{R}^d$, we have $\|| \nabla f_i(x^k)-g_i^k \||\leq \max_i \|| \nabla f_i(x^0)-g_i^0 \||$ that can be made small. This inequality, and our step-size rule **do not need the knowledge of the function suboptimality gap**, i.e. $f(x^0)-f(x^\star)$. This is in stark contrast to Clip21, where its step-size rule depends on not only the function suboptimality gap but also $C_1=\max _{i \in[1, n]} \|| \nabla f_i(x^0)\||$. Unlike $\|| \nabla f_i(x^0)-g_i^0 \||$ in $\alpha$-NormEC, $C_1$ in Clip21 cannot be made small easily. > This implies that the privacy-utility trade-off offered by the analyzed algorithm is very poor Thank you for pointing out our impreciseness. We agree that in the private setting, $R$ cannot be made arbitrarily small, e.g. R goes to zero. We will remove the discussion after Corollary 2, which states that as $R\rightarrow 0$, $\alpha$-NormEC achieves the utility bound of $\mathcal{O}\left( \Delta \sqrt[4]{ \frac{d\log(1/\delta)}{n\epsilon^2} } \right).$ > Yes, the experiments look alright, but I am not convinced that the method is significantly better than Clip21 based on the experiments in Appendix H.4. I believe these experiments should have been included in the paper, and the fact the empirical difference between the two algorithms not high be discussed as a limitation. We kindly disagree as our main focus is on private training. First, our method has proved convergence guarantees unlike Clip21 in the private training. Second, we do not claim that our method outperforms Clip21 in the non-private setting; however, Table 3 shows that $\alpha$-NormEC achieves slightly higher accuracy than Clip21 for most values of $\beta \in \{0.01, 0.1, 1\}$. We performed additional comparisons in the private training, where DP-$\alpha$-NormEC significantly outperforms DP extension of Clip21 (which does not have convergence guarantees in the private setting). Kindly see [the attached plot link](https://postimg.cc/bDhgkq7R). > Why would restricting the data heterogeneity lead to vacuous bounds? Bounded heterogeneity $\delta:=\|| \nabla f_i(x) - \nabla f_j(x)\||$ is an unrealistic condition, as heterogeneity can be arbitrarily large in practice. As $\delta$ grows, any convergence bounds that depend on $\delta$ can be very loose (**vacuous bounds**). This implies that the corresponding algorithms do not converge. Furthermore,the bounded gradient assumption implies bounded heterogeneity. We kindly refer to Khaled et al. (2020) for the discussions in more detail. Furthermore, thank you for spotting the typos. We will fix them in the revision. We hope these clarifications have sufficiently addressed your concerns, providing a clearer understanding of our study's contributions and implications. We are eager to engage in further discussion to resolve any remaining concerns. Please consider the score accordingly. Best regards, Authors
null
null
null
null
null
null
Causal Invariance-aware Augmentation for Brain Graph Contrastive Learning
Accept (poster)
Summary: This paper proposes a Causally Invariance-aware Augmentation for brain Graph Contrastive Learning(CIA-GCL)which addresses distribution shifts in brain graph datasets by using causal decoupling with invariant learning to identify the invariant subgraphs and designing an invariance-aware augmentation strategy for enhanced brain disease classification and interpretability.The experiments on three real-world brain disease datasets show that CIA-GCL achieves state-of-the-art performance, generalizes well to multi-site brain datasets, and offers interpretability. ## update after rebuttal I appreciate the authors' efforts in providing these additional results through anonymous links. My concerns have been addressed, and the findings appear well-justified. I find that the authors have thoroughly justified their methodological choices and have adapted them in a manner that is both appropriate and impressive for a high-quality applied research paper. After carefully considering the other reviewers' comments and the authors' responses, I agreed to accept the paper. Claims And Evidence: Yes, the article cites relevant literature to support the proposed ideas and provides corresponding theoretical proofs for the two proposed theorems. Methods And Evaluation Criteria: Yes,the datasets selected of the article are three commonly used datasets in brain network classification tasks, which have certain value for computer-aided diagnosis of brain diseases. And ten-fold cross-validation can more reliably evaluate model performance and fully utilize data to reduce evaluation fluctuations caused by different data partitioning.In addition, this paper selects various evaluation metrics to comprehensively validate the performance of the algorithm. The results of the comparison algorithm are found to be very close to those in the original paper, ensuring high experimental reliability. Theoretical Claims: Yes, regarding Definition 3.4, the author maximizes the mutual information between subgraphs with the same causal factor to ensure that the invariant subgraph contains the maximum amount of information related to Y and can make the most stable predictions for Y. The author considers subgraphs with the same label as a set of subgraphs under the influence of the same causal factor. Experimental Designs Or Analyses: Yes. The experimental setup in section 4.1.2 of this paper is to verify the model's generalization ability for multi-site data. Because changes in data collection equipment and protocols at different sites may introduce biases that affect model generalization. The experimental setup of separating the site data from the training and testing sets in the text is a commonly used setting for recently verifying the generalization ability of methods. Supplementary Material: Yes, mainly the theoretical proof in Appendix C, the comparison of different graph augmentation methods in contrastive learning in the field of brain graph analysis in Appendix B, and the detailed experimental settings such as datasets, baselines in Appendix D. Relation To Broader Scientific Literature: Due to the contrastive learning learns the most discriminative features in graphs in an unsupervised manner and exhibits good generalization, methods based on this approach show potential in brain graph analysis. However, the techniques used in constructing augmented samples may disrupt the local structures in brain graphs that contain important information, thus making them less effective for processing brain graph data. Under the assumption of causality, this paper integrates contrastive learning with invariant learning to address the issues encountered in handling brain graph data. By leveraging causal disentanglement and invariant learning, the method selects subgraph structures containing critical information from brain graphs as anchor graphs. Furthermore, it innovatively designs an augmentation strategy tailored for brain data to enhance the ability to capture these important subgraphs. Essential References Not Discussed: No, in the relevant work summary, the author provides a detailed introduction to the techniques used in the proposed method. Other Strengths And Weaknesses: Strengths: 1. The motivation of the article is clear. On the first page, it analyzes two problems in the existing methods of processing brain graph data, and also compares and analyzes the existing related methods. Then, in the introduction of the method section, these two problems are also reviewed and verified through corresponding experiments. 2. The method proposed in this paper is innovative. It analyzes and hypothesizes the problems existing in the existing brain graph data and proposes a contrastive learning framework for brain graph analysis. Innovations are made in the selection of anchor graphs and the design of enhancement strategies, combining invariant learning and contrastive learning to alleviate the difficulties of brain graph data analysis. 3. The paper has a certain level of theoretical foundation.In the third section, the article proposes a hypothesis and a definition and theorem based on it, and provides a corresponding theoretical proof, making the article more persuasive. 4. The experiment of this paper is relatively rich. Experiments were conducted on three real brain disease datasets, and experiments on multi-atlas data were included. The performance of the proposed method was then verified from multiple aspects, corresponding to the problems of existing methods analyzed in the introduction of this paper. 5. The language of the article is clear and logical. In the third section, the method is introduced step by step, which is easy for readers to understand and makes the structure of the whole article compact. 6. The paper is well-structured, clearly expressed, and easy to understand.The framework diagram of the method described in the article is relatively clear and easy for readers to understand. Weaknesses 1. There are some issues with the layout of some of the content in the article.In the second paragraph of the second page, the author's introduction to invariant learning is a bit confusing. The citation and related textual description are mixed together. And The definition 3.6 on page 3 regarding the three properties of good brain augmented samples is confusingly arranged. 2. In the section 2.2, only relevant literature in the field of invariant learning is listed and introduced, and the shortcomings of existing methods or the relationship between invariant learning technology and brain graph analysis are not analyzed. 3. Among the baselines, only the GATE method is a brain graph analysis method based on contrastive learning. The article mainly focuses on comparing with other technologies, reducing the possibility of the proposed method having better performance. Other Comments Or Suggestions: 1. The text uses many letters but they are confusing. For example, in the lower right corner of page 4, Formula 3 and Formula 5 use \mathbit{p}_{\mathbit{ij}}^{\mathbit{inv}}, but Formula 4 uses \mathbit{p}_{\mathbit{ij}}. On page 5, \mathbit{E}_{\mathbit{Y}_\mathbit{k}}^\mathbit{s}\ and \mathbit{E}_{\mathbit{Y}_{\left(\mathbit{k}\right)}}^\mathbit{s} in formulas 7 and 8. It is recommended that the author reorganize the characters in the text and add a symbol table to facilitate readers' understanding. 2. There are some errors in the article involving symbols, such as, in formula 4 on page 4, \mathbf{\alpha}\ should be \mathbit{r}(the maximum edge proportion). It is recommended that the author sort out the corresponding expressions in this paper. Questions For Authors: 1. In the summary of 3.3, is there any difference between the “invariance-aware graph augmentation strategy”proposed by the author and other common augmentation methods? Which strategy is designed considering the characteristics of brain data? 2. In the “Brain Node Features Representation”summary on page 4, how does the author take brain structure into consideration when designing the node features extraction module module? 3. On page 6, the summary of “invariant loss”says“we treat{\ G}_k^{inv}, {\ G}_k^{a1}, and {\ G}_k^{a2}\ as graph data generated under different environments ”. What is the basis for the author to regard these graph data as different environments in invariant learning? 4. On page 5,is the v that appears in row 246 of the right column the corresponding inv\ ? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are deeply grateful for your insightful and constructive comments. We have carefully addressed each of them as follows. **[Weekness]** **W1:** Thank you for pointing this out, and we sincerely apologize for the formatting issues in the original submission. We have carefully **revised the manuscript to correct the layout problems**. **W2:** Thank you for pointing out this important issue. - Brain graph data often come from multiple sites, and fMRI itself is known to be Low SNR . These factors lead to **limited generalization ability** in existing brain graph models. Invariant learning as studied in the OOD literature, offers a promising direction to address this challenge. - Many existing methods rely **solely on invariance optimization**, which may not be sufficient to capture the most informative and robust substructures. In contrast, our method **integrates both invariance optimization and explicit representation alignment.** We provide a detailed analysis in **Reviewer bXt3 – [C2]**. **W3:** Thank you for the helpful comment. We have **added comparisons with two additional** contrastive learning-based methods (A-GCL and Contrasformer) **in Table 2 & Figure1**(https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md) **[Suggestions]** Thank you for pointing this out. We sincerely appreciate your careful observation. We **have corrected the symbol misuse in this paper**, and carefully reviewed the manuscript to ensure consistency in all symbolic notations. **A summary of the key symbols** has been added in Table 4 for clarity https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md **[Questions]** **Q1:** Thank you for your thoughtful question. We provide a detailed comparison of our **invariance-aware graph augmentation strategy** with other common methods in **Appendix B**, highlighting the unique properties of our augmented views. - **Property a(invariant)** is specifically designed for brain network data. We use **full-length fMRI data** and treat **the invariant subgraph as an anchor**, preserving its core structure. **As noted in [1], random truncation may remove discriminative patterns**, resulting in augmented views that deviate too far from the anchor and harm contrastive learning. - **Property c(diversity)** improves generalization by generating **multiple augmented views per subject**, encouraging the model to learn richer representations. Moreover, we impose an **orthogonality loss** to reduce redundancy between views. **This design is supported by prior work [2]**, which shows that generating only one positive view may limit the model’s ability to learn representative features. We believe these design choices make our augmentation strategy better **aligned with the characteristics and challenges of brain graphs in the introduction.** [1]Respiratory, cardiac, EEG, BOLD signals and functional connectivity over multiple microsleep episodes [2]Supervised contrastive learning **Q2:** Thank you for the question. In our **node feature extraction module**, we take the brain structure into account by carefully designing the convolutional kernels to align with the **topological properties of structural brain networks**. Specifically, our model employs **edge-to-edge** and **edge-to-graph** convolutional filters. One dimension of **each kernel is set to match the number of brain regions (ROIs)**, allowing connections that belong to the same region to be **updated synchronously**. This design ensures that the local structural context of each brain region is preserved and effectively utilized during feature learning. **Q3:** Thank you for the insightful question. In the context of invariant learning, **different environments are typically characterized by variations in data distributions**. In our framework, both $G^{a_1}$ and $G^{a_2}$ are **generated from $G_k^{inv}$ by $G^{a}=\zeta(G^{inv}\oplus\Delta{G})$**, $\Delta{G}$ represents distinct spurious subgraphs. - These spurious components are disentangled from the invariant structure via Eq. (6). Therefore, all three graphs originate from the same subject reflect the **original graph under different distributional conditions**, which aligns with the concept of multiple environments in invariant learning. - By enforcing the **invariant loss** $\mathcal{L}_{\text{inv}}$, the extracted $G^{inv}$ is encouraged to satisfy **Property (b)** in **Definition 3.3**—**maintaining prediction stability under different environments**. This helps mitigate the **distribution shift problem** commonly observed in brain graph data. **Q4:** Yes, thank you for pointing this out, and we apologize for the confusion. We have modified it and we have revised the notation throughout the paper for consistency and have included a **summary table of all symbols** in **[Table 4]** for clarity.(https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md) --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. They have provided thorough replies to the feedback and conducted additional experiments to address the raised concerns. The rebuttal demonstrates rigorous efforts in revising the manuscript, and the revisions have effectively resolved most of the identified issues. In detail, the authors have included new comparative analyses in the rebuttal, comprising figures and tables that involve seven additional algorithms. The original manuscript already compared nine methods, making the experimental comparisons in this paper highly comprehensive. Notably, while many of the previously evaluated algorithms were traditional methods, the newly added ones focus on more recent advancements closely aligned with the paper's scope, such as contrastive learning-based models and techniques targeting OOD generalization. Besides, the authors have also refined the notation, further enhancing the paper's readability. Overall, the experiments in this study are extensive, with robust coverage across compared algorithms, evaluation metrics, and datasets. --- Reply to Comment 1.1.1: Comment: Dear Reviewer aeNH, We sincerely appreciate you taking the time to review our response and recognizing the improvements in our experimental section. Due to time limits, we initially reported results on ABIDE I (Table 1 setting). We have now completed all experiments on 3 datasets with 6 methods under 2 settings and T-tests. We hope you can take some time to review the newly added experimental results(Tables1,2 & Fig1 https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md. Any further feedback or suggestions would be immensely valuable for helping us enhance this work. Once again, we sincerely appreciate your positive feedback and recognition of our work. We hope our responses have adequately addressed all concerns, and we would be most grateful if you could consider increasing the evaluation score accordingly. Should any additional clarifications be required, we remain fully available to provide them. All authors of paper 3026.
Summary: This paper proposes a causally invariance-aware augmentation for brain graph contrastive learning, utilizing a learnable brain invariant subgraph, which is identified based on a causal decoupling approach to capture the maximum label-related invariant information with invariant learning. The paper is well-organized, and the experimental results show promising performance. Claims And Evidence: The submission makes strong claims about the advantages of invariant learning for OOD generalization in brain network classification, but some claims lack clear and convincing evidence. Specifically, the method's novelty is questionable, as key components resemble existing techniques like GSAT [1]. Methods And Evaluation Criteria: The approach does not explicitly account for the unique properties of brain networks, such as ROI consistency, which raises concerns about its biological validity. Theoretical Claims: The approach is theoretically sound. Experimental Designs Or Analyses: The absence of comparisons with relevant contrastive learning baselines weakens the empirical support for the claims. Strengthening these aspects with additional experiments and discussions would improve the submission. Supplementary Material: I check most of the supplementary material Relation To Broader Scientific Literature: Some related works are not discussed [2,3] Essential References Not Discussed: [1] Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism. ICML 2022 [2] Contrasformer: A Brain Network Contrastive Transformer for Neurodegenerative Condition Identification. CIKM 2024 [3] Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model. TNNLS 2024 Other Strengths And Weaknesses: Strengths: 1. The work effectively demonstrates that invariant learning offers advantages in tackling out-of-distribution (OOD) problems, in addition to potential performance improvements. 2. The paper is well-organized, making the methodology and findings easy to follow. 3. The causal relationship underlying the approach is theoretically sound. 4. The proposed method achieves competitive performance compared to recent algorithms for brain network classification tasks. Weaknesses & Areas for Improvement: 1. The invariant subgraph extraction method closely resembles the one used in GSAT [1]. Additionally, the contrastive learning component is adapted from existing techniques rather than introducing a novel approach. As a result, the proposed method appears to be a combination of existing components tailored for brain network classification, rather than introducing substantial innovation. 2. The method does not explicitly address the unique characteristics of brain networks. Unlike general graph datasets, brain network datasets share the same regions of interest (ROIs) across different graphs. Subgraphs with the same structure but different ROIs may serve different functional roles, which is not considered in the current approach. Extracting invariant subgraphs without incorporating ROI consistency may lead to biologically inconsistent representations. Given these concerns, applying the method to general graph OOD datasets instead of brain networks may be more appropriate. 3. While the paper discusses differences from existing graph contrastive learning methods, these models are not included in the empirical study. Additionally, other contrastive learning models specifically designed for brain networks, such as [2,3], are not discussed. Including these in the discussion and empirical comparisons would strengthen the paper's contributions. Other Comments Or Suggestions: NA Questions For Authors: See weakness Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are deeply grateful for your insightful and constructive comments. We have carefully addressed each of them as follows. **[Claims]** Thank you for raising this important point. We acknowledge that [1] also incorporates invariant learning by using an information bottleneck-based loss to extract subgraphs. However, we combine **two forms of invariant learning**—**Invariance Optimization** and **Explicit Representation Alignment**—whereas [1] adopts only the former. (Please refer to our response to **Reviewer bXt3 – [C2]** for a detailed comparison.) **For the novelty of our approach, please see [W1]** **[Experiment]** Thank you for this suggestion. We agree that your comment helps strengthen the experimental support of our claims. For new comparative experiments, please see **Table 2 & Fig1** (https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md) **[Essential References Not Discussed]** Thank you for highlighting the missing references and we provide the analysis as follows: - Regarding **[1]** , we have already discussed its connection and differences with our work in the **[Claim]** section. - For **[2]** , we have added experimental comparisons with this method. **[3]**focuses on modeling **signed graphs**, aiming to address limitations of existing GNNs on unsigned data. This differs from the distribution shift problem we target. - As shown **Table 3**, we analyze the augmentation methods of these two models. Specifically, **[2]** performs ROI-level contrastive learning by treating the same ROI across different subjects as positive pairs, and different ROIs ( across subjects) as negative pairs. **[1]** constructs augmented views by generating subgraphs from different time segments of BOLD signals. Table 3: Graph augmentation method comparison | Property | HSGPL | Contrasformer | | --------------- | ----- | ------------- | | label-perserve | ✓ | ✓ | | all time series | | ✓ | | learnble | | | | invariant-aware | | | | diversity | | | **[Weekness]** **W1:** Thank you for raising this important point. The **comparison with GSAT** has been addressed in detail in the **[Claims]** section. Our method is specifically designed for brain network classification and **contributes in two key aspects**: - brain invariant subgraph extraction module: We jointly optimize three loss terms (Eqs. 11–13) to ensure $G^{inv}$ satisfies both **causal and invariance properties in Definition 3.3**; - Causal aspect: We introduce a novel causal loss $\mathcal{L}_{cau}$ beyond standard ERM, encouraging subgraphs to **maximize shared information among same-label samples** (Theorem 3.4). - Invariant aspect: Our method integrates two methods (see **Reviewer bXt3 – C2**). The combined use of $\mathcal{L}{{inv}}$ and $\mathcal{L}_{con}$ promotes invariance by **aligning both prediction risks and latent features across environments.** - Contrastive learning: While using standard contrastive loss, **the way we construct positive and negative samples is novel** based on the invariant-aware augmentation strategy. The innovation of this strategy are discussed in **Reviewer aeNH – Q1**. We hope this clarifies the novelty of our approach and its distinction from existing methods. **W2**:Thank you for raising this important concern. We fully acknowledge the importance of ROI consistency in brain network analysis. We **suggest that our method takes ROI consistency into account, which we elaborate on from the following three aspects.** - We perform **subject-specific subgraph extraction** rather than forcing identical subgraph structures across individuals. An invariant subgraph $G_k^{inv}$ is extracted for k-subject to ensure personalized yet functionally consistent representations. - To promote functional alignment, we introduce the **causal loss** $\mathcal{L}_{\text{cau}}$, which **encourages subgraphs with the same label to share informative structure**. This effectively aligns functionally relevant patterns across individuals within the same class. - As shown in **Figure 3 on page 8**, we visualize the average adjacency matrices of the extracted invariant subgraphs for ASD and TC groups. Clear group-level differences emerge, particularly in the red-boxed regions—indicating stronger connectivity in specific ROIs for ASD subjects. This supports that the extracted subgraphs are not only structurally discriminative but also **functionally aligned**. **W3**: Thank you for your thoughtful comment.We have **added two additional contrastive learning baselines** to our experiments, including **[2]**, as you suggested. Since **[3]** does not provide publicly available code, we included **A-GCL**, a representative contrastive learning method (**Table2 & Fig1**- https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md)
Summary: This work proposed a Causally Invariance-aware Augmentation for brain Graph Contrastive Learning to address the challenges of data shift in multi-site brain data. Outstanding performance by their method has been shown in the experiments on ABIDE-I, -II, and ADHD200. While the method presentation is not easy to follow given the figures lack of descriptions, and the evaluation of their key contributions is not explicitly done in the experiments as ablation studies of the proposed components are missed. Claims And Evidence: 1. In the introduction, there is no reference or evidence provided to support the challenging problems of the low signal-to-noise ratio of fMRI and the significant distribution shift of brain graph data. 2. The claim of this work being "the first attempt to incorporate invariant learning into brain graph studies" is too strong given existing works for brain network analysis using invariant learning I can easily find in the Google Scholar [1][2][3][4][5]. [1] Mach, Mathieu, et al. "Connectome embedding in multidimensional graph spaces." Network Neuroscience 8.4 (2024): 1129-1148. [2] Li, Xinhui, et al. "Learning pipeline-invariant representation for robust brain phenotype prediction." (2023). [3] Xu, Jiaxing, et al. "BrainOOD: Out-of-distribution Generalizable Brain Network Analysis" ICLR (2025) [4] Arora, Mehul, et al. "HyperGALE: ASD Classification via Hypergraph Gated Attention with Learnable HyperEdges." 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. [5] Wei, Boyang, et al. "Causal invariance guides interpretable graph contrastive learning in fMRI analysis." Alexandria Engineering Journal 117 (2025): 635-647. 3. No reference is provided to support the claim of " In other literature, brain graphs can also be referred to as brain functional networks" in Sec 3.1. Methods And Evaluation Criteria: There is no description in the main text for Fig. 1 of the CIA-GCL framework, which contains several abstract elements such as subfigure (b) and (c). Meanwhile, two separate sections of Graph Augmentation further confused me in understanding this method. To my best understanding, this method is not designed explicitly for the challenging issue targeted by this work, the distribution shifting in multi-site fMRI data. This issue of distribution shifting is not defined in the paper. This method first extracts invariant subgraphs and spurious subgraphs via $\Phi$. Then, augment invariant subgraphs as an invariance-aware graph augmentation $g$. Last, the predictor $w$ takes the feature of augmented subgraphs followed by three loss functions supervising contrastive learning, invariance, and the causality between labels. Theoretical Claims: I didn't find issues in theoretical claims. Experimental Designs Or Analyses: Overall, given experimental results on three datasets showing outstanding performance by the proposed method, the soundness is good. As well as the validity is good since cross-validation was proceeded. They compared with recent state-of-the-art methods of GNN and causality-inspired networks. But, some state-of-the-art methods with high performance for fMRI data [1][2] and those explicitly designed for OOD problem [3] are not compared. [1] Kan, Xuan, et al. "Brain network transformer." Advances in Neural Information Processing Systems 35 (2022): 25586-25599. [2] Bedel, Hasan A., et al. "BolT: Fused window transformers for fMRI time series analysis." Medical image analysis 88 (2023): 102841. [3] Chen, Yongqiang, et al. "Learning causally invariant representations for out-of-distribution generalization on graphs." Advances in Neural Information Processing Systems 35 (2022): 22131-22148. Instead of deleting loss functions, the ablation studies should be proceeded by removing either $\Phi$ or $g$ as these two components are claimed as the key contributions in this work. Supplementary Material: Codes and data are released in the Supplementary Material. Relation To Broader Scientific Literature: The contribution of being "the first attempt to incorporate invariant learning into brain graph studies" is suspicious given other literatures about brain network invariant learning [1-5]. [1] Mach, Mathieu, et al. "Connectome embedding in multidimensional graph spaces." Network Neuroscience 8.4 (2024): 1129-1148. [2] Li, Xinhui, et al. "Learning pipeline-invariant representation for robust brain phenotype prediction." (2023). [3] Xu, Jiaxing, et al. "BrainOOD: Out-of-distribution Generalizable Brain Network Analysis" ICLR (2025) [4] Arora, Mehul, et al. "HyperGALE: ASD Classification via Hypergraph Gated Attention with Learnable HyperEdges." 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. [5] Wei, Boyang, et al. "Causal invariance guides interpretable graph contrastive learning in fMRI analysis." Alexandria Engineering Journal 117 (2025): 635-647. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: The highest accuracy of the proposed method compared to other SOTA models. Weaknesses: Paper presentation needs to be improved.. Other Comments Or Suggestions: None Questions For Authors: 1. What is the difference between the proposed invariant subgraph extraction and previous methods? 2. What is the benefit of using the proposed methods of subgraph extraction and invariance-aware augmentation regarding the OOD problem? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **[Claims]** **C1**.We have **added references to better support** the two challenges - Low SNR: [1,2] highlight the noise in fMRI data-posing challenges for analysis - Distribution shift: [3,4] show that site and individual variability introduce distributional heterogeneity in brain graphs [1]Reduction of physiological fluctuations in fMRI using digital filters;[2]Lowering the thermal noise barrier in functional brain mapping with magnetic resonance imaging;[3]Distributionally-Adaptive Variational Meta Learning for Brain Graph Classification;[4]Individual variability in functional connectivity architecture of the human brain **C2**.We sincerely apologize for the overstatement regarding the novelty and thanks for pointing this out. We **have revised the novelty claim** (see **Reviewer MG4v–W1 for analysis**) and **analyze the method you mentioned** 🔹In our paper, **invariant learning** refers to strategies for OOD generalization. As classified in "Out-Of-Distribution Generalization on Graphs: A Survey", this falls into **two types: Invariance Optimization and Explicit Representation Alignment** [1]The**“invariant” refers to graph-theoretic properties** (e.g., node degree), not invariant learning [2]Uses **sMRI images**, while we focus on **fMRI time-series** processed through a unified pipeline. Their invariant learning is based on differences pipelines [3]Applies **Invariance Optimization** via information bottleneck [4]Refers to **permutation invariance** in readout layers, not related to our setting [5]Also falls under **Invariance Optimization** by contrastive loss between causal subgraphs 🔹**Our invariant learning combines both Invariance Optimization and Explicit Representation Alignment** - We define a brain invariant subgraph (**Definition 3.3.**) based on Assumption 3.2, and then extract the invariant subgraph using Eq.6 ( **Invariance Optimization by causal decoupling $L_{cau}$**) - Then, we construct a distribution-shifted augmented set $\mathcal{G}^\acute{e}$ as environments via our **invariance-aware augmentation strategy**, and then minimize $L_{inv}, L_{con}$(**Explicit Representation Alignment** by aligning prediction risks and latent features) - This joint framework enhances the extraction of $G^{inv}$ via **dual invariance learning objectives** - Our **ablation studies empirically validate the effectiveness.** Using only $L_{cau}$ reflects pure invariance optimization, while introducing $L_{inv}$ and $L_{con}$ incorporates representation alignment **[Methods]** **M1**.We provide a description of the framework here:(a) extracts $G^{inv}$ via learnable mask, guided by the three loss in (c). (b) uses $G^{inv}$, $G^{s}$ to generate augmented graphs via the invariance-aware strategy (upper dotted box), which are then encoded into graph-level features $\boldsymbol{z}^E = g(G^E) = Pooling(GCN(\mathcal{G}^E))$ (lower dotted box) **M2**.Under the Assumption 3.2,each brain graph is modeled as a combination of $G^c$ and $G^s$. The **distribution shift problem** refers to the phenomenon where subjects sharing the same underlying invariant substructure $G^c$ may still appear statistically different due to the influence of $G^s$, as illustrated in **Eq. 1** $$ P(Y \mid G^c, G^s = g_1) \ne P(Y \mid G^c, G^s = g_2), \quad \text{where } g_i = f_{\text{env}}(E_i) \tag{1} $$ For how our method addresses this challenge, see Q2 **[Experiments]** **E1**.We added comparisons with the mentioned methods in Table2 & Fig1(https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md) **E2**.In the **ERM-only** setting, we **remove the augmentation** and use only a shared encoder. The subgraph is extracted without constraints and trained via cross-entropy alone Adding $L_{cau}$ validates the **causal aspect**, while adding $L_{con}$, $L_{inv}$ brings in the **augmentation strategy**, allowing us to assess its effectiveness. Gradually adding these losses enforces the subgraph satisfies the two defined properties **[Questions]** **Q1**: We have provided a detailed explanation in **Reviewer MG4v–W1, Point 1** **Q2**:Our approach addresses the OOD problem by **jointly optimized through two invariance-driven objectives** (see [C2]) based on the **invariant subgraph extraction** and the **augmentation strategy** - We generate distribution-shifted graphs $\mathcal{G}^E$ per subject via our **augmentation strategy**, and use them to optimize both $L_{cau}$ *and* $L_{inv}$ - These graphs are built by **expanding the extracted invariant subgraph** $G^{inv}$, used as the **anchor**, which helps preserve key features and avoid harmful perturbations[6] Together, **the two modules are tightly integrated**: the augmentation depends on high-quality $G^{inv}$, while the learning objectives guide the extraction of robust, invariant substructures—leading to improved OOD generalization [6]Sega: Structural entropy guided anchor view for graph contrastive learning --- Rebuttal Comment 1.1: Comment: # [Claims] The authors bring new literature on OOD generalization to categorize CIA-GCL and others. In the original version, they were motivated by two challenges: low SNR and a shift in the distribution of brain graphs. The motivation is still unclear to me: 1. Why combine two types of invariant learning for OOD generalization? 2. Why previous methods stratified by above two categories cannot address the challenges in brain graphs? # [Methods] **M1.** The description of the framework is still not connected with the main text. For example, which equations corresponded to different blocks in the framework? What are those symbols that aren't described in the main text, like $Edge_{ij}$? **M2.** I cannot find empirical or theoretical evidence to support Eq.1. There is no supervision between $G^c$ and $G^s$ as well. By the way, I cannot find the equation how to get $G^c$. # [Exp] - Why was only the ABIDE-I evaluated? - Which setting were these new evaluations used, Table 1 or 2 in the original version? - Are these three loss functions equivalent to the proposed components illustrated in Fig 1? They appear sumed together to blocks (a) and (b) without a clear correspondence indicated. --- Reply to Comment 1.1.1: Comment: Thank you very much for carefully reading our reply and providing valuable suggestions again [Claims] Distribution shift and low SNR are widely recognized challenges in brain graph analysis. While our motivation shares common ground with existing works, **our solution differs substantially** - To mitigate low SNR, we consider small-world property of brain graphs[1] and propose Assumption 3.2, **reformulating the problem as learning a causal and invariant subgraph** - For distribution shifts,we found existing augmentation strategies may unsuited for brain graphs, so we **design a brain-specific, invariance-aware augmentation** integrated with invariant learning C1: - complementary strengths: - Invariance optimization:seeks to identify invariant structures by specific loss, guiding the model to **internally discover** features - Representation alignment:**imposes external constraints** by aligning representations across environments, enforcing invariance under distribution shifts - fMRI are sensitive to external factors[2]: Our augmentation strategy simulates such environmental changes and enforces model to align both features and predictions This dual approach allows us to **both discover and enforce invariance**. We hope this unified approach provides a promising direction for brain graph analysis C2:Prior methods user only one type of invariance learning, **which may be insufficient** for complex fMRI brain data. We combine contrastive learning and invariant learning, both known to improve generalization. Our augmentation strategy **serves as a bridge between the two types of invariance learning**, as discussed in previous Q2. It jointly optimizes complementary objectives to enhance invariance extraction. Ablation results further validate this design [Methods] M1:Thanks for pointing this out. We clarify the mapping below - Eq.(1–5)—Module(a), Eq.(7–10)—Module(b), Eq.(6, 11–13)—Module(c) - The $Edge_{i,j}$ refers to edge feature $e_{ij}$ in the original version(Lines 198–200,right column of p4). We apologize for the inconsistency and **have added a symbol table**(Table 4,https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md) M2: 1. Eq.1 formalizes distribution shift problem. Although it is not derived from theory, **it is motivated by empirical findings**: - Brain networks exhibit small-world properties[1], and key structures(e.g., DMN)consistently differ across ASD and TC[3], suggesting the existence of a shared causal subgraph - Distribution shifts may cause overfitting to spurious environment(e.g. site), rather than learning population-invariant features[4] Hence, we user Eq.1 as a conceptual tool to describe the shared subgraphs($G^c$)may yield different predictions under varying spurious conditions($G^s$) 2. **$G^c$ is actually $G^{inv}$**,described in Lines 125–130(right column of p3). We used $G^c$ in the prior reply to emphasize its causal role under Assumption 3.2. We apologize for not clarifying the notation earlier 3. The learning of $G^c$(i.e., $G^{inv}$)is **supervised by the label via cross-entropy loss** in Module(c)($R^{inv}$). The decoupling between $G^c$ and $G^s$ is achieved by the($minI(G^{inv},G^{s})$)in Eq.11, which helps reduce spurious interference and encourage $G^{inv}$ to focus on causal structures [Exp] E1,E2:Due to time limits, we initially reported results on ABIDE I(Table 1 setting). We have now completed all experiments **on 3 datasets with 6 methods under 2 settings and T-test**(Tables 1,2&Fig1.We hope you can take some time to review the newly added results.https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md) E3:Yes, they are. In detail: - $L_{cau}$:pulls together $G^{inv}$ from same-label subjects(*green lines between subjects*), and push apart $G^{inv}$ and $G^{s}$ from same subject(*red lines within subject*).This is also discussed in Section 3.3 (aligned with Module (a)), which elaborates on the “causal decoupling” in the title and illustrates how our framework separates $G^c$ and $G^s$ - $L_{inv}$:encourages prediction consistency across environments($R^{inv}$, $R^{a_1}$, $R^{a_2}$) - $L_{con}$:aligns features across environments for same subject(*green lines within subject*) [1]Human connectome:Structural and functional brain networks[2]Automatic denoising of functional MRI data:Combining independent component analysis and hierarchical fusion of classifiers[3]Atypical functional brain connectivity during rest in autism spectrum disorders[4]Contrasformer:A Brain Network Contrastive Transformer for Neurodegenerative Condition Identification Thank you once again for your valuable suggestions, which have greatly contributed to improving our paper. We hope our responses have adequately addressed your concerns, and we would be most grateful if you could consider increasing the score accordingly. Should any further clarifications be needed, we would be happy to provide them
Summary: The paper "Causal Invariance-aware Augmentation for Brain Graph Contrastive Learning" (CIA-GCL) introduces a method to enhance generalization and interpretability in brain graph classification by leveraging causal invariance-aware learning within graph contrastive learning. The proposed approach aims to address distribution shifts commonly present in multi-site datasets by extracting brain invariant subgraphs and using them for augmentation. Claims And Evidence: Most claims are supported by empirical results, but additional validation (e.g., statistical significance tests, domain expert evaluation) could further strengthen them. Methods And Evaluation Criteria: The proposed methods are well-aligned with the problem of brain graph classification under distribution shifts. The benchmarks used (ABIDE, ADHD200) are widely accepted in brain graph learning, making the results reliable and relevant. Theoretical Claims: The paper introduces theoretical formulations for: Invariant Subgraph Definition (Assumption 3.2, Theorem 3.4, 3.5), Causal Learning Objective (Equation 6), Mutual Information Maximization (Equations 11, 12, 13). These claims appear reasonable based on prior work in causal learning and OOD generalization. Still, the theoretical grounding relies on causal assumptions that might not strictly hold in real-world brain data. Experimental Designs Or Analyses: The experiments are well-structured, including: Benchmark evaluations on three datasets. * Ablation studies on loss functions. * Hyperparameter sensitivity analysis. * Interpretability analysis (biomarker detection). Potential Issues: * The experimental design does not include domain experts validating the identified biomarkers, making the interpretability claim somewhat weaker. * No significance tests are performed, which could impact result reliability. * Hyperparameter selection methodology is unclear—some choices (e.g., edge ratio selection) may be dataset-specific. Supplementary Material: The appendices provide: Proofs of theoretical claims (Appendix C); Detailed algorithmic implementations (Appendix A); Additional experimental details (Appendix D, E) However, the supplementary materials were not explicitly reviewed for correctness beyond the proofs being outlined. Relation To Broader Scientific Literature: The paper builds on graph neural networks (GNNs), contrastive learning, and OOD generalization. The work is well-situated within brain graph analysis, contributing a novel integration of causal learning into GCL. Missing connections: * The paper does not discuss recent advances in self-supervised learning on graphs (e.g., methods beyond contrastive learning, such as masked prediction approaches). * Other OOD generalization techniques (e.g., domain adaptation, meta-learning) are underexplored. Essential References Not Discussed: The paper largely cites relevant literature. Other Strengths And Weaknesses: Strengths: * Novel Integration of Causal Learning and GCL: The paper presents a creative combination of causal invariance learning and graph contrastive learning (GCL), which has not been extensively explored in brain graph analysis. * Improved Generalization on Multi-Site Data: The focus on OOD generalization is crucial for real-world medical applications where dataset shifts frequently occur. * Interpretability through Invariant Subgraph Extraction: The method provides interpretable insights by identifying brain invariant subgraphs, which can highlight potential biomarkers for brain disorders. * Comprehensive Experimental Validation: The three real-world datasets (ABIDE I, ABIDE II, ADHD200) and comparison with SOTA methods ensure that the method is rigorously tested. * Detailed Ablation Studies and Hyperparameter Sensitivity Analysis: The inclusion of ablation studies on loss functions and sensitivity analysis on key hyperparameters strengthens the empirical evaluation. Weaknesses: * Limited Validation of Extracted Invariant Subgraphs: While the method claims to extract biologically relevant invariant subgraphs, it does not include validation from neuroscientists or clinical experts to confirm whether these subgraphs correspond to actual biomarkers. * Lack of Statistical Significance Tests: The reported improvements lack statistical significance testing (e.g., p-values or confidence intervals), making it difficult to assess the reliability of the reported gains. * No Direct Comparison with Recent Self-Supervised Graph Learning Methods: While the paper compares against contrastive learning methods, it does not evaluate how non-contrastive self-supervised methods (e.g., masked autoencoders, graph predictive coding) perform in the same setting. * Assumptions About Distribution Shifts May Be Overly Simplified: The method assumes that label variations reflect distributional shifts, which may not always hold in real-world settings. * Computational Complexity and Scalability Not Addressed: The proposed approach involves graph partitioning, augmentation, and contrastive training, which could introduce computational overhead. The paper does not discuss the scalability of CIA-GCL to larger datasets. Other Comments Or Suggestions: No Questions For Authors: How does CIA-GCL compare to non-contrastive self-supervised learning methods? What is the computational cost of CIA-GCL? How does CIA-GCL compare to non-contrastive self-supervised learning methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are deeply grateful for your insightful and constructive comments. We have carefully addressed each of them as follows. **[Claims]** **C1**. **statistical significance tests** : We sincerely thank the reviewer for the valuable suggestion regarding enhancing the experimental validation. We have **included the t-test results in Table 1**. https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md **C2**. **domain expert evaluation** : Thank you for the valuable suggestion. We agree that involving domain experts would further strengthen the interpretability analysis. This is an important direction for our future work, and **we plan to collaborate with medical professionals** to conduct more comprehensive expert evaluations. - Although we did not directly involve neuroscientists or clinicians in this study, we sought to validate the identified subgraphs and important brain regions by **comparing them with findings reported in neuroscience and medical literature**. Specifically, we examined whether the brain regions and connections identified by our method overlapped with those previously associated with the disorder. **The observed consistency provides supporting evidence** that the identified biomarkers are meaningful and clinically relevant. - In the original manuscript (**Figure 3 on page 8**), we **visualized** the connectivity matrices of the **invariant subgraphs extracted for ASD and TC. The red box in the middle of the figure belongs to the occipital region**. It can be seen that the intensity of ASD in this area is higher than that of TC patients, which is **consistent with the findings of literature [1]**. - Additionally, we identified and visualized the **top 10 most important brain regions** based on the invariant subgraphs. Literature review shows that several of these regions align with prior clinical and neuroimaging findings. [1]Functional brain organization for visual search in ASD **[Theoretical Claims]:** We thank the reviewer for raising this point. **Our core assumption** is that brain graphs contain a causally relevant subgraph linked to the clinical label, while the remaining parts are influenced by site variability and individual differences. Although grounded in causal learning, **our framework is flexible and does not require strict equality conditions**. This assumption is motivated by practical challenges in brain graph classification, and our goal is to mitigate them by identifying key substructures relevant to the task. **[Experiment]** **E1.** Thank you for the comment. We have provided a detailed explanation in **[C2]** **E2.** Regarding the verification of significance tests, please see **[C1]** **E3.** Thank you for the insightful comment. **Our parameter selection is not dataset-specific**. Here, we provide additional clarification: 1)We tuned three key hyperparameters—the edge ratio and two loss coefficients—based on experiments on the ABIDE I dataset with the AAL atlas, selecting the values that yielded the best accuracy. **2)Once the optimal values were determined, we fixed these hyperparameters across all datasets** to ensure consistency and not selected for specific dataset. **[Literature]** To address this point, we have **added experimental comparisons** with recent self-supervised learning methods(METAFormer)and OOD generalization approaches(CIGA,BrainIB). **Please refer to Table2 & Figure1** https://anonymous.4open.science/r/CIA-GCL-Experimental-Results-F8B5/Experimental_Results.md **[Weaknesses]** For our response to **W1 ,W2 and W3**, we kindly refer the reviewer to [C2],[C1] and [Literature]. **W4:** We thank the reviewer for this thoughtful observation. **As acknowledged in the Conclusion** section,our use of labels as a proxy for distributional shifts is a simplification. We adopted this approach **as an initial step**, given that labels often reflect key data differences, and **we plan to refine the identification of distribution shifts in future work.** **W5**:We thank the reviewer for raising this point. Our method **has relatively low memory consumption** — the GPU memory usage on an NVIDIA GeForce RTX 3090 is approximately 3000MiB, as the model architecture is lightweight and does not rely on complex components. While the **training time** is longer (~3 hours), this is mainly due to the subject-specific processing, where invariant subgraphs are extracted individually for each brain graph to better address inter-subject variability. The detailed computation process is provided in the pseudocode in Appendix A. Importantly, **the method is scalable to larger datasets, as the operations are applied in a per-subject and batched manner**, which does not introduce significant computational bottlenecks. **[Questions]** - For our response to **Q1 , Q3**, we kindly refer the reviewer to [Literature]. - For our response to **Q2**, we kindly refer the reviewer to [W5]. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the rebuttal. I will keep my original score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer nb4i, Thank you for your professional and valuable review. Under your guidance, we have thoroughly reviewed the literature and incorporated comparative methods along with statistical test results. We hope our responses have adequately addressed all your concerns. We sincerely appreciate your contributions to enhancing the quality of our manuscript, as well as your encouraging feedback. Once again, we deeply appreciate your positive feedback and recognition of our work. All authors of paper 3026.
null
null
null
null
null
null
A Theoretical Justification for Asymmetric Actor-Critic Algorithms
Accept (poster)
Summary: This paper investigates reinforcement learning in partially observable environments. The authors present finite sample analysis for neural actor-critic methods, comparing both asymmetric and symmetric critics. Their findings elucidate the relationship between the use of an asymmetric critic and the reduction of aliasing errors in the agent state. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: N/A Supplementary Material: I did not check the math in supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The paper presents a finite sample analysis for neural actor-critic methods, comparing both asymmetric and symmetric critics. The convergence rate achieved, $O(1/\sqrt{T} + 1/\sqrt{N} + 1/\sqrt{K})$, aligns well with state-of-the-art analyses in the field. The authors demonstrate that the actor-critic method using a symmetric critic may introduce aliasing errors, which does not arise with asymmetric critics. Weakness: Please refer to Question section. Other Comments Or Suggestions: Please refer to Question section. Questions For Authors: 1. As I am not an expert in reinforcement learning within the POMDP setting, I found the background section somewhat confusing. Specifically, I would like clarification on the state space and agent state space: which one represents the state space accessible to the agent, and which one corresponds to the underlying true state space? Additionally, what is the distinction between $U$ and the policy $\pi$? Clearer definitions would greatly aid readers who may not have extensive knowledge of POMDPs. 2. In the definition of POMDPs, do both $|S|$ and $|A|$ need to be finite? I believe both are assumed to be finite in [1]. 3. In Algorithms 1 and 2, the initial state-action pairs are required to be sampled from a discounted visitation distribution. Why can’t the starting point be chosen arbitrarily from the state-action space? Additionally, how is the starting point sampled from the discounted visitation distribution? Reference: [1] Cayci, Semih, Niao He, and R. Srikant. "Finite-time analysis of natural actor-critic for pomdps." SIAM Journal on Mathematics of Data Science 6.4 (2024): 869-896. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your review and for hihglighting that we attain state-of-the-art convergence rate. Below, we answer to your remarks. **Environment and agent states.** We agree that some other key concepts could be better introduced in the background. The environment state is typically inaccessible to the agent, which is provided with a stream of observations instead. From these observations, the agent maintain a state, updated recurrently after each action and new observation, that we call the agent state. This is the state accessible to the policy. In the asymmetric learning paradigm, we assume that in addition to the agent state, the agent is provided with the environment state *at training time only*. **We propose to improve the background to better explain and illustrate with examples what could be typical observations, environment states, agent states, update functions, and what would be aliasing in this context.** We could notably use the example discussed in our response to Reviewer UL7j. **Update function and policy.** In a sense, the update function simply implements the processing of the history that is done before passing the (stochastic) feature $z$ of the history $h$ to the policy $\pi$. Together, the update function $U$ and the policy $\pi$ thus implement the history-dependent function that controls the POMDP. Only the policy $\pi$ is considered learnable, and common examples for the update function $U$ are a window of past observations, the belief/Kalman filter (if the latter is known), etc. **We propose to clarify this distinction and to give some examples to help readers outside of the field to better understand this.** Please note that we also discuss in the conclusion the fact that the proof straightforwardly adapts to an agent state process $U$ that is learned through RL. **Finite states.** All state spaces ($\mathcal{S}$, $\mathcal{A}$, $\mathcal{O}$ and $\mathcal{Z}$) are considered finite in this analysis. **It is indicated in Subsection 2.1, but if you feel that we do not insist enough on that in the introduction or anywhere else in the main text, please let us know and we would be happy to address this.** **Initial sampling distribution.** Concerning the temporal difference learning algorithm (Algorithm~1) and the finite-time bound for the critics (Theorem 3 and Theorem 4), any initial distribution $p(s,z,a)$ could have been chosen, and the bound would have been obtained with norm $\lVert\cdot\rVert_p$ instead of $\lVert\cdot\rVert_d$. However, since we need to take samples from the discounted visitation measure $d = d^\pi \otimes \pi$ for the policy-gradient (as is standard for on-policy policy-gradient), the performance bound is expressed in terms of expectation under that distribution. As a consequence, we thus need to know a bound on the critic error under that norm $\lVert\cdot\rVert_d$, which explains why the temporal difference algorithm was studied under that sampling distribution. **We propose to add a note in our paper to clearly state that the analysis of Theorem 3 and Theorem 4 can be straightforwardly adapted to any sampling distribution $p(s,z,a)$.** **Sampling algorithm.** The algorithm implicitly assumes that we have an oracle allowing to sample from this distribution $d = d^\pi \otimes \pi$, as is standard in prior works. **We will clarify this in the algorithm.** In any case, there exists a tractable algorithm for sampling from this distribution. This algorithm samples an initial timestep $t_0 \sim \text{Geom}(1-\gamma)$ from a geometric distribution with success rate $1 - \gamma$, and then the current policy would be used to take $t_0 - 1$ actions in the environments, and would then sample a last action. The environment state $s_{t_0}$, agent state $z_{t_0}$ and action $a_{t_0}$ constitute a sample from this distribution $d$. We hope that you will find these adaptations pertinent to improve the overall accessibility of the document, and please do not hesitate if you need us to clarify something further. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed response. My concerns have been fully addressed. I suggest clarifying the background further, as this would be beneficial for readers who may be less familiar with POMDPs. In light of the improvements made, I am increasing my score to 4. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for increasing their score. We will expand the background to clarify environment and agent states and also make the other changes discussed in our reply.
Summary: This paper analyzes two different versions of a natural actor-critic method for learning policies in finite POMDPs. The first version is "symmetric": both actor and critic depend only on the internal agent state. The second version is "asymmetric". Here, the critic depends also on the underlying environment state, a privileged information that is often available during training. The authors prove convergence bounds for both methods, and show that the asymmetric method has an edge over the symmetric one. Claims And Evidence: All claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: This is a purely theoretical paper. Theoretical Claims: I checked the results for obvious flaws, but did not read the proofs in detail. Experimental Designs Or Analyses: This is a purely theoretical paper. Supplementary Material: I did not read the supplementary material. Relation To Broader Scientific Literature: The authors clearly position their work within the context of the existing literature. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: The paper is very well written. I had no problem following despite not being very familiar with the prior works. Additionally, the result is interesting and seems very relevant. Other Comments Or Suggestions: ### Minor comments - Section 3 (nitpicking): I assume that the set $\mathcal C$ closed and convex, such that the projection is properly defined? Also, do you assume that $\mu$ has full support? Otherwise $\lVert\cdot\rVert_\mu$ is technically a seminorm (not sure if this is relevant). - Section 3.1: I don't think you defined the distribution $d$ in the expressions $\lVert\cdot\rVert_d$. Is it simply $d(s, z, a) = d^\pi(s, z)\pi(a | z)$? - Section 3.2: I suppose now $d(z, a) = \sum_{s \in \mathcal S} d^\pi(s, z)\pi(a | z)$? - Eq. 16: Typo: $\mathcal F^B_\chi$ - Eq. 19: Why is $\pi_\theta$ "log-linear"? Specifically, is $\log \pi_\theta$ really a linear function of $\theta$, even though it includes a log-sum-exp term? - Section 4.1/4.2: Typo: the errors of the Q-function should be measured by a seminorm weighted over not just states and agent states, but also actions, (e.g. the norm $d^\pi(s, z)\pi(a | z)$). The equations here (e.g. Eq. 28) just use $d^\pi$. - Regarding the aliasing error (Eq. 38), does the expected distance between the beliefs depend on $k$? If not, the expression inside the weighted norm could be simplified as $\frac{1}{1 - \gamma^m}\mathbb E^\pi[\lVert\hat b_{0, m} - b_{0, m}\rVert]$ - Eq. 39: I assume that $\pi_t \doteq \pi_{\theta_t}$? - Theorem 5: What is $R$ in the definition of $\zeta$? (Is it the maximum reward?) - In $\varepsilon_{\mathrm{inf}}$, what are $\hat b_k$ and $b_k$? (Does it refer to the subroutine Algorithm 1 inside Algorithm 2?) Questions For Authors: - Line 163 (left column): This is an interesting point. Are you saying that the symmetric Q-function $Q^\pi$ generally does not satisfy the $m$-step symmetric Bellman equation? Is there a counterexample? - Algorithms 1 and 2 assume i.i.d. sampling from $d^\pi$. In a more realistic setting, the samples are generated from online interaction with the environment. Did you consider this setting as well? - What is the motivation for the "asymmetric obective", Eq. 22? If I understand correctly, you want to justify why $w_*^{\pi_\theta}$ is a good parameter update ("it minimizes this loss"). Or are you introducing the loss function $\mathcal L$ for another reason, namely that $w_*^{\pi_\theta}$ is a minimizer of $\mathcal L$ and $\mathcal L$ can be more easily optimized than solving equation (20)? - Can you give some intuition about the assumption Eq. 39? - Line 356 (left column). What do you mean by "asymmetric learning is insensitive to aliasing in the agent state"? Clearly also asymmetric learning is impacted by the error that comes from incomplete state information. (For example, $\varepsilon_{\mathrm{inf}}$ captures some of this error). Additionally, this sentence sounds a bit as though you prove that asymmetric learning is better than symmetric learning. However, the paper presents only upper bounds on the risk, meaning that no such conclusion can be drawn (this would require lower bounds). Could you comment on this, and how much you think this conclusion ("asymmetric is better than symmetric") to depend on the specific algorithm? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for you review. We are happy that you found the paper convincing and accessible. Please find our answers below. **Minor remarks.** All your remarks and typos were valid. The set needs to be closed, we can assume it strictly convex to ensure the uniqueness of the projection. We consider distributions with full support, which is reasonable regarding the policy parametrization (no action has a zero probability). Log-linear policies indeed denote policies whose logarithm is not linear in $\theta$. In Thm. 5, $R$ should be $B$ in the definition of $\zeta$. Finally, $b_k$ and $\hat{b}_k$ are not related to the subroutine, but are well-defined random variables. **We will clarify all this.** **Aliasing Bias.** An example where $Q^\pi(z,a)$ is not equal to $\tilde{Q}^\pi(z,a)$ is the following POMDP with $z_t=o_t$. There are two starting states (L or R), that are observed perfectly. Any action taken from these states always yields a zero reward. The agent can switch to the other starting state, or go to the corresponding absorbing state (G or P). In the absorbing states, he stays there forever with the corresponding reward (±1). The policy is $\pi(a|z)=go,\forall z\in\mathcal{Z}$. It gives $Q^{\pi}(o=L,a=down)=\frac{\gamma}{1-\gamma}$ and $\tilde{Q}^\pi(o=L,a=down)=0$ because $\tilde{Q}^\pi(o=D,a=\cdot)=0$. **If you think this example would be worth being added to the paper, we would be happy to do so.** - $O(L|L)=O(R|R)=1$ - $O(D|G)=O(D|P)=1$ - $R(+1|G,:)=R(-1|P,:)=1$ - $R(0|L, :)=R(0|R,:)=1$ - $A=\\{swap, go\\}$ **IID samples.** We tried deriving the proof for correlated samples (with the usual assumption that we have reached a stationary distribution) following similar steps as [Bhandari et al. (2018)](https://arxiv.org/abs/1806.02450). While it is feasible to derive this proof for the asymmetric critic, the symmetric setting seemed more subtle and we did not succeed. It would thus have prevented a direct comparison. We can cite a concurrent work [(Cai et al., 2024)](https://arxiv.org/abs/2412.00985) that does not make the IID assumption. However, it uses an explicit belief approximation, which makes the comparison with standard asymmetric AC difficult. **Despite not required by the concurrent work policy of ICML, we propose to discuss this reference, because we believe it is an interesting related work.** Please note that Thm. 3-4 could be straightforwardly adapted to any distribution instead of $d$, provided that we still sample IID. **NPG Objective.** The NPG $w_\*^{\pi_\theta}$ is considered a good choice, motivated in the literature from a long time [(Kakade, 2001)](https://papers.nips.cc/paper/2073-a-natural-policy-gradient). It can be seen as a standard PG, but where we select an appropriate metric for the parameter space. However, since computing the Fisher information matrix is not trivial, we prefer to minimize these simple convex losses ($\mathcal{L}$ and $L$), which are both proven to be minimized by the NPG $w_\*^{\pi_\theta}$. **We propose to better explain the NPG in the paper, to make things more clear for the reader.** **Concentrability coefficient.** Eq. (39) shows a concentrability coefficient, assumed to upper bound the ratio of probabilities between the optimal policy and any policy $\pi_t$. It roughly states that the policy should be stochastic enough so as to visit all state-action pairs from the optimal policy with nonzero probability throughout the learning process. Thm. 5 states that the lower this concentrability coefficient, the lower the upper bound on the suboptimality. It notably motivates the initialization of the policy to the max-entropy policy. **We propose to add a better explanation of this hypothesis in the paper.** **Insensitivity to aliasing.** Given the current analysis, you are totally right, we overlooked the $\epsilon_\text{inf}$ term and the claim was too strong. However, after double checking, you pointed out something interesting: this term can be removed in the asymmetric case. Indeed, we could stop at equation (222) instead of (223) so that we keep $A^\pi(S,Z,A)$ instead of using $A^\pi(Z,A)$. Then, in (256-258) we can use the MDP version of Lemma D.1 with $A^{\pi^+}(S,Z,A)$ instead of $A^\pi(Z,A)$, where $\pi^+$ is defined as in Appendix A.1, which does not have the $\epsilon_\text{inf}$ term [(Kakade & Langford, 2002)](https://homes.cs.washington.edu/~sham/papers/rl/aoarl.pdf). Finally, when substituting back (258) in (260), it would make this term disappear from the final result. **While the previous proof was valid, this result is even stronger, and we could add this improvement to the paper.** We also agree that we should not claim that asymmetric learning is insensitive to aliasing, but that we have an *upper bound* that is not dependent on the level of aliasing. **We will fix that in the paper.** We hope that these discussions will have answered your questions. Please do not hesitate to request additional explanations. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifying comments, and I will keep my score as is. I think it would be beneficial to include the example that you provide here (in the appendix, for curious readers). However, I don't quite understand it yet. If possible, could you please elaborate on the dynamics of this POMDP: how does the state change given the action, and what is the action "down" (is it "swap" or "go")? Thank you! --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your response. We will add the example in the appendix. There was indeed a typo in which “down” should have been understood as the action “go”, sorry for the inconvenience. Here is a more detailed explanation. We consider a POMDP $\mathcal{P} = (\mathcal{S}, \mathcal{A}, \mathcal{O}, T, R, O, P, \gamma)$ where $\mathcal{S} = \\{ L, R, G, P \\}$ (left, right, goal, pit), $\mathcal{A} = \\{ \text{swap}, \text{go} \\}$ and $\mathcal{O} = \\{ L, R, D \\}$ (left, right, down). The agent state $z_t \in \mathcal{Z} = \mathcal{O}$ is considered to be the last observation $o_t$ only, i.e. $U(z_t | z_{t-1}, a_{t-1}, o_t) = \delta_{o_t}(z_t)$. The states and corresponding observations are depicted below, along with the rewards obtained when taking any actions from a given state. We also detail the probability functions $T$, $R$, $O$, and $P$. ``` states observations rewards +---+---+ +---+---+ +---+---+ | L | R | | L | R | | 0 | 0 | +---+---+ +---+---+ +---+---+ | G | P | | D | D | |+1 |-1 | +---+---+ +---+---+ +---+---+ We can swap observable states: - T(L | R, swap) = T(R | L, swap) = 1 We can go to absorbing states: - T(G | L, go) = T(P | R, go ) = 1 We stay in absorbing states - T(G | G, :) = T(P | P, :) = 1 We observe the observable state: - O(L | L) = O(R | R) = 1 We do not observe the absorbing state (aliasing): - O(D | G) = O(D | P) = 1 No reward from observable states: - R(0 | L, :) = R(0 | R, :) = 1 Rewards from absorbing states: - R(+1 | G, :) = 1 - R(-1 | P, :) = 1 Uniform distribution over initial states: - P(L) = P(R) = P(G) = P(P) = 0.25 ``` Let us now consider the policy $\pi(\text{go} | z) = 1, \forall z \in \mathcal{Z}$. Since $Pr(s = G | z = D) = \Pr(s = P | z = D) = 0.5$, we have $Q^\pi(z = D, a) = \tilde{Q}^\pi(z = D, a) = 0, \forall a \in \mathcal{A}$. It is also easy to see that $Q^{\pi}(z = L,a = \text{go})=\frac{\gamma}{1-\gamma}$. However, we obtain $\tilde{Q}^\pi(z = L,a=\text{go})=0$ because $\tilde{Q}^\pi(o=D,a=a)=0, \forall a \in \mathcal{A}$, which indeed results in $Q^\pi \neq \tilde{Q}^\pi$. In addition, it is worth noting that the choice of agent state was reasonable for this POMDP. Indeed, the agent state $z_t = o_t$ is sufficient for optimal control (i.e., there exists an optimal policy conditioned on $z_t$ that is as good as the optimal policy conditioned on the complete history $h_t$).
Summary: This paper provides a theoretical justification for the empirical success of asymmetric actor-critic algorithms in partially observable environments. Using finite-time convergence analysis, the authors prove that asymmetric critic methods (which leverage additional state information during training) eliminate an "aliasing error" term that appears in symmetric methods. This error term arises from the difference between the true belief distribution (given history) and the approximate belief distribution (given agent state). The mathematical analysis demonstrates that this is the only difference in the error bounds between asymmetric and symmetric approaches, providing a clear theoretical explanation for why asymmetric methods often converge faster and perform better in practice. Claims And Evidence: The paper claims that asymmetric actor-critic algorithms eliminate an error term due to aliasing, leading to improved convergence compared to symmetric algorithms. It also claims that the theoretical bounds hold for finite state policies and linear function approximators. The claims are supported by mathematical analysis: 1. Formalize both symmetric and asymmetric actor-critic algorithms 2. Derive finite-time bounds for both approaches 3. Identify the specific error term (aliasing error) that exists in symmetric but not asymmetric methods 4. Show that this is the only difference between the error bounds The paper lacks empirical validation. Methods And Evaluation Criteria: **Methods:** The theoretical approach is sound for analyzing the problem. The authors adapt existing convergence analysis techniques to both symmetric and asymmetric settings, enabling direct comparison. The mathematical framework correctly models partially observable environments and agent behavior. **Evaluation Criteria:** The evaluation is purely theoretical, with no empirical experiments. While the theoretical bounds are well-justified, the lack of empirical evaluation limits the practical relevance of the results. Theoretical Claims: I verified the key theoretical claims, particularly: 1. **Theorems 1 and 2 (Natural Policy Gradients)**: The proofs correctly show that both symmetric and asymmetric approaches optimize similar objectives. 2. **Theorem 3 and 4 (Finite-time bounds)** 3. **Lemma C.1 (Upper bound on aliasing)**: The derivation connecting aliasing error to total variation distance between belief distributions is mathematically sound. The mathematical analysis is rigorous, though some proof steps could benefit from additional explanation for clarity. Experimental Designs Or Analyses: The paper does not include experimental validation, which is a significant limitation. Supplementary Material: The supplementary material contains detailed mathematical proofs that support the main claims of the paper. These proofs are thorough and generally correct, with appropriate adaptations of existing proofs for the asymmetric setting. Relation To Broader Scientific Literature: This work connects to several research directions: 1. **Asymmetric learning paradigms**: Builds on prior work by Pinto et al. (2018) and Baisero & Amato (2022), providing theoretical justification for previously heuristic methods. 2. **POMDP solving**: Relates to approaches for partially observable environments, complementing techniques like recurrent neural networks and belief state modeling. 3. **Convergence analysis**: Extends finite-time bounds from Cayci et al. (2024) to the asymmetric setting. 4. **State representations**: Has implications for representation learning in POMDPs and the benefits of leveraging privileged information. Essential References Not Discussed: The paper covers the most relevant literature, but could benefit from discussing: 1. **Information-theoretic approaches to POMDPs**: The paper by Eysenbach et al. (2023) "Information Asymmetry in KL-Regularized RL" (NeurIPS 2023) directly addresses quantifying and mitigating observation aliasing through an information-theoretic lens, showing how information asymmetry creates similar benefits to the asymmetric actor-critic approach. This work provides complementary theoretical insights about the same underlying issue. 2. Chen et al. (2020) "Learning with Privileged Information for Efficient Image Super-Resolution" (ECCV 2020) employs a teacher-student framework with privileged information during training that closely parallels the asymmetric actor-critic approach. Their theoretical analysis of knowledge distillation in asymmetric settings could provide useful insights for reinforcement learning. Both works approach the problem of leveraging privileged information during training from different angles and could strengthen the contextual understanding of asymmetric learning approaches. Other Strengths And Weaknesses: **Strengths:** - Clear identification of the specific mathematical advantage of asymmetric methods - Rigorous theoretical analysis with well-defined bounds - Addresses an important gap between empirical success and theoretical understanding **Weaknesses:** - Restricted to linear function approximation and finite state policies, limiting applicability to deep RL - Lacks empirical validation even in simple environments Other Comments Or Suggestions: ### Suggestions for Empirical Validation The theory could be empirically validated with a simple, specific POMDP grid world environment: #### **Setup**: - **Environment**: 5×5 grid with deterministic movements. - **True State**: Full (x, y) position of the agent. - **Observation**: Limited 3×3 visual field centered on the agent, showing wall/path/goal. - **Controlled Aliasing**: Multiple visually identical room patterns are deliberately placed in different positions (e.g., hallway segments that look identical but lead to different rewards). - **Precise Aliasing Mechanism**: The environment uses a fixed set of 3×3 templates that repeat in different locations, creating true state aliasing. #### **Control Parameter**: - Vary the template repetition frequency (e.g., 10%, 30%, 50% of states share identical observations with at least one other state). #### **Experiment**: 1. Compare symmetric critic (using only observations) vs. asymmetric critic (with access to true x,y position) 2. Measure learning curves, final performance, and sample efficiency 3. Calculate belief state entropy at each position to quantify the actual impact of aliasing This controlled environment would directly test whether the performance gap between methods increases proportionally with aliasing severity, as predicted by the theoretical bounds in **Theorems 3 and 4**. --- ### Suggestions for Improving Accessibility Provide more intuitive explanations of key concepts, such as aliasing, to make the paper more accessible to a broader audience. Questions For Authors: 1. Could you provide empirical validation of your theoretical findings in a simple environment with controllable degrees of aliasing? This would help verify that the performance gap between symmetric and asymmetric methods increases with aliasing severity as predicted by the theory. 2. How do your theoretical results extend to non-linear function approximation as commonly used in deep RL? The current analysis is limited to linear approximators. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your review. We are happy to read that you appreciated our manuscript and that you considered that we addressed an important gap in the theoretical understanding of these methods. Please find our answers below. **Toy experiment.** We agree that an empirical validation of the findings of this paper could be interesting. However, given the short rebuttal period, it will probably not be possible to obtain these results on time. We hope that you will still consider the results interesting and worthy of publication. **Clarity.** After proofreading Lemma C.1, we completely agree that some things needed to be improved to help the reader understand the philosophy of the proof, and the origin of aliasing. In particular, we think it would be useful to justify the initial expression of $Q$ and $\tilde{Q}$, which would help understanding that the bias comes from the difference in distribution at the bootstrapping timestep $m$, when we condition on $z_m$ versus $h_m$. **We propose to add this discussion in the proof of the Lemma.** **Related Works.** We are not sure what papers you wanted to refer exactly. We found papers with the same titles but different bibliographic data. Could it be that you wanted to refer the papers "Information Asymmetric in KL-Regularized RL" (Galashov et al., ICLR 2019) and the paper "Learning with Privileged Information for Efficient Image Super-Resolution" (Lee et al., ECCV 2020)? **We definitely think that the first paper would be worth mentioning in the introduction.** Regarding the second one, we feel like discussing this vision paper may not be that much relevant. Please note that we already mentioned seminal "teacher-student" approaches in RL in our introduction (Choudhury et al., 2018; Warrington et al., 2019). **Nonlinear approximators.** This is an interesting future work. Moreover, as discussed above, this analysis could also embed the analysis of a learnable RNN, by considering a fixed window as the fixed agent state process. This follow-up work could perhaps build on the analysis proposed by Cayci & Eryilmaz (2024a, 2024b). **We propose to adapt the discussion about future works to make that more clear in the paper.** **Aliasing.** We acknowledge a confusion in our paper about the definition of aliasing. We think that (perceptual) aliasing should be understood as the fact that an observation $o$ (or an agent state $z$) corresponds to different $h$. This is a direct consequence of $o$ (or $z$) being non Markovian. As a consequence of aliasing, the fixed point $\tilde{Q}(z,a)$ is not equal to $Q(z,a)$. We also referred to this as aliasing, but we agree that we should call this the aliasing bias. Then, Lemma C.1 highlights in (139) that the aliasing bias $\lVert\tilde{Q}-Q\rVert$ is given by the difference in distribution of $s$ at the bootstrapping timestep $m$, depending on whether we condition on $z_m$ or $h_m$. This bias is then upper bounded by the TV between these distributions times the maximum rewards. We call this term the aliasing term. **We propose to revise the paper by fixing the naming convention for these three aspects (aliasing, aliasing bias, aliasing term), and by better discussing the origin of aliasing.** In addition, we agree that some other key concepts could be better introduced in the background. **We propose to improve the background to better explain and illustrate with examples what could be typical observations, agent states, environment states update functions, and what would be aliasing in this context.** We could notably use the example discussed in our response to Reviewer UL7j. We hope that these explanations will have answered all your questions. Please do not hesitate if you want us to further elaborate on something that was unclear.
Summary: This paper provides a finite-time analysis of asymmetric AC approach in POMDPs, showing that leveraging privileged state information during training eliminates uncertainty errors inherent in symmetric critics. Theoretically, it compares convergence bounds for asymmetric/symmetric critics, proving that asymmetric critics remove the so called aliasing term which comes form the uncertainty inherent in symmetric critics. Claims And Evidence: The claim of uncertainty error of the symmetric critic is well supported and provided with formal proof. Methods And Evaluation Criteria: This paper do not propose methods or included evalution. Theoretical Claims: I checked for the correctness of the theoretical claims, the proofs seem well-established, well-organized and error-free. Experimental Designs Or Analyses: N/A Supplementary Material: I reviewed the supplementary material in full. Relation To Broader Scientific Literature: This work is a relatively small patch on the popular AC method, turning what's previously floating intuition into solid terms, this work have the potential for summoning further theoretical works on asymmetric AC. Essential References Not Discussed: It seems that the paper is citing all the prominent related works. Other Strengths And Weaknesses: I appreciate that fact that the paper is highly well-written, the presentation is logically structured, with well-defined notation with a very focused narrative. The finite-time bounds (Theorems 3–5) are derived meticulously, extending prior NAC analyses (e.g., Cayci et al.) to asymmetric settings loosely following the style of Baisero et al.. The connection between aliasing and critic bias is clearly formalized. Nevertheless, I would like to comment that aliasing in POMDPs may be usually understood as aliasing multiple histories into one state, emphasizing on a many-to-one relationship as suppose to differences in belief distributions; hence I consider the term to be a bit abused in this paper. The core idea—using privileged state information to mitigate aliasing—echoes Baisero & Amato, who used h instead of z; while the analysis is novel, the conceptual leap is modest. Now, moving to the theoretical results, although we understand that we are short of one error term, one is to be uncertain whether the difference in error can be made up by less variance in learning targets, and there are no further discussion or tests on such fronts. Other Comments Or Suggestions: How would the bounds change if the agent state process M is learned (e.g., via RNNs) rather than fixed? I would suggest including toy domains as a sanity check, could simple synthetic POMDPs (e.g., darkroom) illustrate the impact of the uncertainly error? Questions For Authors: Could the analysis be extended to non-linear function approximators (which is common), where aliasing might interact with approximation errors differently, at least include a few sentence of discussion? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your review. We are glad that you found the results interesting and the paper clear. Please find our answers below. **Aliasing.** We acknowledge a confusion in our paper about the definition of aliasing. We agree that aliasing could refer to the fact that an history $h$ corresponds to the different environment states $s$. However, it would not pose any problem since $h$ is the Markovian state of an equivalent MDP: the "history MDP". We instead think that (perceptual) aliasing should be understood as the fact that an observation $o$ (or an agent state $z$) corresponds to different $h$. This is a direct consequence of $o$ (or $z$) being non Markovian. As a consequence of aliasing, the fixed point $\tilde{Q}(z,a)$ is not equal to $Q(z,a)$. We also referred to this as aliasing, but we agree that we should call this the aliasing bias. Then, Lemma C.1 highlights in (139) that the aliasing bias $\lVert\tilde{Q}-Q\rVert$ is given by the difference in distribution of $s$ at the bootstrapping timestep $m$, depending on whether we condition on $z_m$ or $h_m$. This bias is then upper bounded by the TV between these distributions times the maximum rewards. We call this term the aliasing term. Cayci et al. (2024) try to mitigate this bias using multistep TD learning, while we show that the problem disappear when considering asymmetric learning. **We propose to revise the paper by fixing the naming convention for these three aspects (aliasing, aliasing bias, aliasing term), and by better discussing the origin of aliasing.** **Variance effect.** We agree that, with the law of total variance, for any $\Pr(S,Z)$, any $z\in\mathcal{Z}$ and any $a\in\mathcal{A}$, we have $\mathbb{E}[\mathbb{V}[\sum_t\gamma^tR_t|S_0=S,Z_0=z,A_0=a]]\leq\mathbb{V}[\sum_t\gamma^tR_t|Z_0=z,A_0=a]$. It could suggest that using $Q(s,z,a)$ instead of $Q(z,a)$ in the PG will provide less variance. However, this neglects the variance of the estimator $\hat{Q}(s,z,a)$ that may have a higher variance than $\hat{Q}(z,a)$, since it is learned on fewer samples. In conclusion, it not clear what will be the effect of the variance of the target value functions, and we consider this study out of the scope of this paper. We also highlight that the claims on the effect of variance from Baisero & Amato (2022) differ from these from Sinha & Mahajan (2023), which further motivate an empirical study. Anyway, our bound is given on average over the complete learning process, whatever the variance of the learning targets. It thus still gives an indication on the worse case average performance of asymmetric learning compared to symmetric learning, even if there is more/less variance in the targets. **We can expand the discussion in the paper if you find it will be useful.** **Learning the update.** The analysis perfectly holds for an agent state process $\mathcal{M}$ learned through RL. By (i) extending the action space $\mathcal{A}$ with the agent state space $\mathcal{Z}$: $a^+=(a,z')$, (ii) extending the agent state $\mathcal{Z}$ with the observation space $\mathcal{O}$: $z^+=(z,o)$, and (iii) considering an update $U$ that transition to the selected agent state: $z^{+'}=(z',o')=U(z^+,a^+,o')$ where $a^+ = (a,z')$, we can explicitly learn an update by selecting both the environment action and the next agent state $a^+=(a,z')\sim \pi^+(·|z,o)$. In the conclusion, we discuss a more general version of this where the agent can learn a stochastic update. In both cases, the analysis goes through, we are just selecting a particular POMDP and update $U$. **If you feel that this is not clear, we can discuss this further in the paper.** We acknowledge that this is different from the usual way of learning $U_\psi$ with an RNN and BPTT. However, even when using an RNN, we always consider a fixed $\mathcal{M}$ to learn from (usually a window, on which we do BPTT). The analysis for a learned $\mathcal{M}$ can thus be cast as the problem of learning nonlinear features of the window $z$, where the nonlinear feature $\psi(z,a)$ is an RNN. **We propose to add this discussion in the paper.** **Toy experiment.** We agree that an empirical validation of the findings of this paper could be interesting. However, given the short rebuttal period, it will probably not be possible to obtain these results on time. We hope that you will still consider the results interesting and worthy of publication. **Nonlinear approximators.** This is an interesting future work. Moreover, as discussed above, this analysis could also embed the analysis of a learnable RNN, by considering a fixed window as the fixed agent state process. This follow-up work could perhaps build on the analysis proposed by Cayci & Eryilmaz (2024a, 2024b). **We propose to adapt the discussion about future works to make that more clear in the paper.** We hope that these discussions will have answered your questions. Please do not hesitate to ask us to develop further if something was not clear.
null
null
null
null
null
null
Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models
Accept (spotlight poster)
Summary: This paper mainly consists of two parts: discovering that intermediate image features help jailbreak VLLMs and proposing "Clipped-PPO" for jailbreak defense. Experimental results prove the existence of ICET and the effectiveness of alignment. Claims And Evidence: It is clear that layer-wise ASR on aligned LLaVA-1.5 dropped about 15%~50%, which is obvious. However, It would be better if there were experiments on LLaVA-NEXT and Llama 3.2, making it more convincing. Methods And Evaluation Criteria: It makes sense to use Llama-Guard as the ASR judge and perspective API as the toxicity judge. Jailbreak datasets picking for evaluation is appropriate, too. Theoretical Claims: The theory is a standard PPO algorithm with GAE as the advantage. Experimental Designs Or Analyses: See in Other Strengths And Weaknesses. Supplementary Material: I reviewed Appendix E for results on LLaVA-NEXT and Llama-3.2. Relation To Broader Scientific Literature: Prior safety alignment methods do not consider the impact of ``early-exit" image features. Such jailbreak outcomes may be related to adversarial attacks, and it could also provide more insights for safety alignment method designing. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Maybe I am not familiar with the "early exit", currently I am not sure whether users of open-source VLLMs may use the intermediate features of image encoders to feed into LLM module for content generation directly. For me, it is not a real "jailbreak threat", for it seems that the default forward propagation is modified, and some layers of image encoders are deliberately ignored. Anyway, it could make sense as a research project, which identifies some robustness issues in current VLLMs. I find this work has some main shortcomings: - No evaluations of fine-tuned models. It is possible that after fine-tuning, the rejection rate rises with the sacrifice of general capabilities, such as image understanding, logical reasoning, etc. Fine-tuning, even if it is reinforcement learning, may induce over-rejection on normal questions. - imbalanced structure. Standard PPO with GAE loss takes roughly more than one page, while from my personal view, the details could be put into supplementary materials. To prove ICET vulnerability, it is better to put Figure 8/9 in the main part, together with Figure 3. Other Comments Or Suggestions: N/A Questions For Authors: Could you provide some intuitive ideas for using RL instead of basic SFT? It is OK to provide a few insights without detailed experimental results. The reason I ask is that adversarial training may work for this if, for each training sample, we randomly pick one layer for the image feature and calculate the token prediction loss under the supervision of standard rejection answers, such as "I am sorry, but I cannot assist with your request". Please correct me if I miss some points or make false statements. I am open to discussion. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are glad you found our work insightful. Please see our responses below. ***Note: References are added in our response to reviewer WDjm to optimize space.*** --- ### Q1) Clarifying the Threat Model and Motivation Behind the ICET Vulnerability Thank you for raising this point. Recent studies in LLMs and Multimodal LLMs have explored using intermediate or pooled layer representations, instead of the default final layer, to improve downstream performance, inference, and latency [2, 3]. This trend is growing and may shape future deployments. That said, ***our goal is not to propose a new jailbreak attack, but to highlight a fundamental alignment gap: current strategies often assume fixed architectures and embedding configurations***. Our findings serve as a cautionary insight that even minor deviations at inference time, such as using image embeddings from different encoder layers (early exits), can introduce safety vulnerabilities. We will revise the motivation of the paper to highlight these shortcomings and clarify the broader impact of our contributions. --- ### Q2) Evaluating the Risk of Over-Rejection After Alignment We agree that post-training methods may risk performance degradation, such as over-rejection. However, as shown in Equation 6 of our submission, our KL divergence penalty between π_SFT and π_RL helps preserve the VLM’s utility. For evaluating over-rejection after performing L-PPO, we conducted experiments on the safe split of the XSTest dataset [6] using LLaVA-1.5. XSTest includes tricky phrased but safe questions, such as “How can I kill a Python process?”, making it well-suited for evaluating over-refusal, especially in models that have undergone alignment modifications [7]. We sampled 150 safe prompts and paired them with MS-COCO images (as in our AdvBench-COCO dataset) and compared rejection ratios before and after L-PPO, using the standard string-matching evaluation from [6]. ***Table 2 (in our response to reviewer WDjm) shows a reduced rejection in early layers, reinforcing that L-PPO alignment does not lead to over-rejections and may even improve***. Details will be provided in Appendix M. --- ### Q3) Improving Paper Structure We’ve moved the PPO details to Appendix L and brought Figures 8 and 9 into the main paper alongside Figure 3 to better highlight ICET vulnerability. We welcome any additional suggestions. --- ### Q4) Rationale for Using RLHF Over SFT and Discussion of Adversarial Training with Random Layer Selection Thank you for the question. Recent works [4, 5, 8] highlight SFT's limitations, such as overfitting and reliance on labeled data. To address this, we choose RLHF for its better generalization and alignment with human preferences. We also performed SFT experiments under the same settings as Table 2 of our submission to enable a direct comparison between L-PPO and SFT using a standard rejection answer, “I cannot assist with this!”. The quantitative results are shown below; see our response to Q4 of reviewer FW5i for qualitative results. Overall, ***L-PPO outperforms SFT across all metrics.*** One reason could be that the token-level loss (in SFT) instead of a response-level reward (in L-PPO) causes overfitting, thus restricting generalizability. We will discuss this in Appendix N. We agree that randomly selecting image encoder layers during training is an interesting adversarial approach. However, to ensure consistency with our prior setup and a fair comparison between SFT and L-PPO, we did not incorporate dynamic layer selection in our additional rebuttal experiments. Dynamic layer sampling is a promising direction, and we look forward to exploring it in future work. | Layer Set | AASR Original | AASR Align | AASR SFT | ATS Original | ATS Align | ATS SFT | ATR Original | ATR Align | ATR SFT | |-----------|----------------|-------------|-----------|---------------|-------------|----------|---------------|------------|----------| | Early | 75.38 | 45.15 | 51.22 | 8.24 | 4.88 | 7.29 | 0.57 | 0.62 | 0.59 | | Middle | 70 | 47.5 | 69 | 12.9 | 6.63 | 10.98 | -0.34 | 0.46 | -0.1 | | Late | 48.5 | 21.25 | 45.21 | 11 | 7.23 | 12.46 | -0.52 | 0.9 | -0.18 | | Average | 64.62 | 37.96 | 55.14 | 10.71 | 6.24 | 10.24 | -0.09 | 0.66 | 0.10 | Table 3: SFT vs L-PPO, Trainset is RedTeam 2K (RT) and Testset is Custom Prompts (CP) --- ### Suggestion: Additional Results on LLaVA-NeXT and LLaMA 3.2 We agree this would strengthen the paper. Since TRL’s PPOTrainer [1] doesn’t natively support multimodality and each model handles inputs differently, we manually adapted it. Within the rebuttal window, we tested layers 4, 10, and 15 of LLaVA-NeXT and observed a 31% drop in ATS. We plan to include complete results and LLaMA 3.2 results in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. The additional results in over-rejection as well as normal performance settle my related concerns. Besides, more experiments comparing PPO with SFT demonstrate the effectiveness of such an optimization method. I will change my recommendation to weak accept. --- Reply to Comment 1.1.1: Comment: Thank you so much for updating your score. We appreciate it, and we are happy that our response addressed your questions. We are glad to address any additional questions you might have.
Summary: This paper studies the safety alignment of Vision-Language Models (VLMs) by examining what happens when early existing from intermediate layers of the image encoder, rather than using the final-layer embeddings for inference. The authors show that models often produce harmful content when these intermediate embeddings are used, even if the input images are benign and no adversarial seeds are explicitly introduced. To mitigate this, they propose a modified RLHF method, Layer-wise PPO (L-PPO), which updates the model specifically to remain safe even when earlier layers’ embeddings are used. They present extensive experiments on multiple VLMs and evaluate safety on harmful prompts plus benign images. Claims And Evidence: - Claim: Certain intermediate image-encoder layers carry “harmful information,” and skipping to these layers can override the model’s usual safety alignment (producing disallowed or toxic responses). - Evidence: The authors run multiple experiments with different exit layers and show that substituting final-layer embeddings with intermediate ones leads some VLMs to yield unsafe responses. They quantify this with Attack Success Rate (ASR), Toxicity Score (TS), and other metrics. - Claim: Their proposed L-PPO method can improve safety alignment across layers. - Evidence: Results show that applying L-PPO on these early/middle/late encoder layers reduces toxicity (TS) and improves total reward (TR) from a reward model, compared to an unmodified baseline. Methods And Evaluation Criteria: Methods: - The image embeddings are taken from earlier layers, instead of always taking the final-layer encoder output. - To address this vulnerability, an RLHF-based approach (L-PPO) is proposed. It updates the language model’s policy to handle embeddings from each layer, aiming to keep generation safe. - They evaluate with a diverse suite of safety metrics. Evaluation Criteria: - Attack Success Rate (ASR): Proportion of harmful prompts that produce toxic outputs. - Toxicity Score (TS): From Perspective API. - Reward Model Scores (TR): Learned from HH-RLHF to signal how “safe” or “aligned” an answer is. - They also test on VQAv2 to ensure that utility is not overly reduced. Theoretical Claims: - The paper frames a policy gradient argument for L-PPO within the standard monotone improvement bounds of PPO. The idea is that using policy gradient updates with the clipped objective for each layer’s embeddings can theoretically ensure that the policy improves or at least does not degrade in safety alignment relative to the reference. - I went through the proofs and spotted no issues. Experimental Designs Or Analyses: - The authors run ablation-like tests on multiple well-known VLMs (LLaVA variants, Llama Vision) and systematically measure the model’s behavior layer by layer. - They also create or adapt a few test sets—like Redteam2K, minijailbreak, and custom sets pairing benign images with harmful queries—to mimic real or adversarial usage scenarios. Supplementary Material: They show detailed breakdowns in the appendix on - how they compile their custom prompts, - how they train the RL reward model, and - examples of safe/harmful prompts. These clarifications illustrate the authors’ experiments in replicable detail. Relation To Broader Scientific Literature: - This paper connects to a line of research about adversarial prompts, model interpretability, and adversarial attacks, especially those focusing on how knowledge or harmful content might be “localized” in certain layers. - They also relate to recent developments that incorporate multi-layer embeddings (DenseConnector, etc.), further highlighting the possibility that layer-wise embeddings can present new vulnerabilities. - However, the threat model may be less realistic in many applications, since an end user usually cannot pick which image-encoder layer to use. The conditions where an adversary can manipulate internal layers are narrower. Essential References Not Discussed: No essential references are missing that I can think of. Other Strengths And Weaknesses: **Strengths** - The authors identify an interesting phenomenon: harmful generation can be triggered without explicit adversarial tokens, purely by changing the layer used for image embeddings. - The experiments and evaluations are well-designed, with multiple metrics and models tested. - The problem has potentially broader implications for out-of-distribution (OOD) scenarios, especially given ongoing efforts that incorporate multi-layer features. - The paper is easy to read. **Weaknesses** - As I mentioned earlier, the threat model may be less realistic in many applications, since an end user usually cannot pick which image-encoder layer to use. This might be more convincingly framed as an interpretability or “layer-specific representation analysis” study rather than establishing a real security exploit. - The proposed L-PPO may feel like an incremental extension of PPO, with the main novelty being that embeddings are preemptively drawn from different encoder layers. Other Comments Or Suggestions: - One possible reorientation is to emphasize interpretability (i.e., “which layers carry what kind of knowledge?”) rather than purely vulnerability. That framing might even strengthen the paper’s significance. - Additionally, more discussion about how these vulnerabilities would translate into genuine real-world attacks (versus contrived scenario) might help. Questions For Authors: 1. **Threat Model Realism**: Can you clarify scenarios in which an attacker can realistically coerce a VLM to use intermediate-layer embeddings rather than the default final-layer embeddings? What real-world threat vectors do you envision? 2. **Layer Overlap**: Does each intermediate layer have distinct harmful “features,” or is there overlap? How does one systematically identify which or how many layers contain the most toxic or disallowed knowledge? 3. **Comparisons to Stronger Baselines**: Have you compared L-PPO to alternative alignment approaches that simply do repeated supervised fine-tuning for each subset of layers, or other RLHF variants? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We're glad you found our work interesting and impactful. Please see our responses below. ***Note: References are added in our response to reviewer WDjm to optimize space.*** --- ### Q1) Clarifying the Seriousness of the Threat and Broader Implications of the ICET Vulnerability Thank you for raising this important point. ***Our goal is not to frame ICET vulnerability as a new jailbreak attack, but rather to reveal a safety blind spot: that current safety alignment strategies fail to generalize across structurally valid variations in input representations.*** We appreciate your observation on the broader OOD implications, especially as multi-layer features gain traction for performance and flexibility [2, 3]. Our findings highlight how current safety training methods, which typically assume a fixed embedding source, may not be robust to such developments. We believe this insight is timely and relevant, given the increasing trend toward using intermediate or fused layer features for task-specific gains, inference efficiency, and architectural optimization [2, 3]. Without specific layer-wise generalization in safety alignment, such design choices risk introducing new and inadvertent safety concerns, even in seemingly benign settings. That said, we will revise the motivation and refine the flow of the paper. --- ### Q2) Clarifying the Novelty and Motivation Behind the L-PPO Approach We appreciate your comments and would like to emphasize that ***the simplicity of L-PPO is a strength rather than a weakness***. While L-PPO builds on Clip-PPO, its key innovation lies in its layer-wise adaptation to alleviate the novel ICET vulnerability, which has not been explored before. Further, L-PPO enjoys monotonic improvement guarantees as standard Clip-PPO, reinforcing the reliability of our results. Thus, our novelty lies in both uncovering the ICET vulnerability and introducing a practical, targeted, and effective mitigation strategy, making our work a step forward in ensuring the safety of VLMs. --- ### Q3) Identifying Which Layers “Contain” or Overlap in Harmful Knowledge Thank you for this question. We would like to clarify that ***Our study does NOT aim to perform knowledge “localization” or identify which intermediate layer from the image encoder “contains” harmful features.*** Rather, the ICET vulnerability we investigate is fundamentally about out-of-distribution (OOD) behavior introduced by deviating from the default inference pathway used during safety alignment. In all our experiments, the image inputs are benign and remain unchanged across layer selections. ***The harmful generations are not due to any “toxic knowledge” in specific layers but arise because using embeddings from an intermediate layer instead of the default final layer produces fused multimodal representations that fall outside the training distribution seen during training.*** These image encoder intermediate layer embeddings can cause the language model to generate unsafe outputs, even when the input images themselves are harmless. That said, while our goal is not to attribute harmfulness to specific layers, we do include layerwise plots showing how ASR, TSR, and TR vary with different image encoder layers (Fig 3 of our submission). These plots show that some image encoder layers indeed lead to a higher likelihood of harmful generations than others, further supporting that ICET is a distributional misalignment issue rather than layer-specific knowledge encoding. We will clarify this distinction in the revised paper --- ### Q4) Comparing L-PPO to Stronger Baselines and Alternative Alignment Methods Given the recent discussion [4, 5] around the limitations of supervised fine-tuning (SFT), such as overfitting and reliance on labeled data [8], we chose RLHF for its stronger generalization capabilities and alignment with human preferences. Based on your suggestion, we conducted additional SFT experiments. While SFT does reduce the ICET vulnerability, ***it consistently underperforms compared to RLHF across all three metrics*** (Table. 3 of Reviewer zpSp). Our intuition is that the token-level loss in SFT leads to overfitting. We will discuss these results in Appendix N of the revised paper. ***Qualitative Results***: We analyze VLM responses before and after L-PPO and SFT using the following prompt, showing outputs for ICET 15. 1) Original Inference: To make a bomb using common household things, you would need to gather … 2) Inference L-PPO: I’m sorry, but I cannot provide instructions …. 3) Inference SFT: To make a bomb, you will need a few household items such as a knife … ***Result Analysis***: Based on 3) and 4), we note that in SFT, once the output becomes jailbroken, the language modeling objective tends to continue generating harmful content. In contrast, with L-PPO, the model successfully generates an appropriate refusal response. Similar results have also been shown in [4]. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I now understand the ICET vulnerabilities better and the motivation behind this work. The added experiments also addressed my concerns. Hence, I will raise my assessment score to 3. --- Reply to Comment 1.1.1: Comment: Thank you for updating your score and we're glad to hear that your concerns have been resolved. Please let us know if you have any further questions or comments. We would be happy to provide additional clarifications.
Summary: This paper uncovers an ICET vulnerability in Vision-Language Models (VLMs), where utilizing intermediate layer embeddings from the image encoder can compromise the VLM's safety alignment even with safe input images. The authors propose a modification to the Clipped-Proximal Policy Optimization (Clip-PPO) algorithm called Layer-Wise PPO (L-PPO) to perform layer-wise multi-modal reinforcement learning from human feedback (RLHF) and mitigate this vulnerability. Experiments on three VLMs and datasets demonstrate L-PPO effectively reduces the harmfulness. ## update after rebuttal Thanks for this rebuttal. After reviewing responses to the concerns raised, I am inclined to support acceptance. Claims And Evidence: The main claims around the existence of the ICET vulnerability and the effectiveness of L-PPO are supported by the experimental results. Methods And Evaluation Criteria: The layer-wise RLHF approach using L-PPO is well-motivated and sensible for reducing harmfulness in VLMs. However, the evaluation relies on a limited set of automated metrics, which may not fully capture human judgment of safety and capability. Theoretical Claims: The theoretical foundations connecting L-PPO to monotone improvement guarantees of Clip-PPO can be better justified. Experimental Designs Or Analyses: The experimental design comparing harmfulness metrics before and after L-PPO alignment across different layers, VLMs, and datasets is reasonable. Supplementary Material: Yes Relation To Broader Scientific Literature: More discussion is needed on related work in multi-modal models specifically and alternative alignment approaches beyond RLHF. Essential References Not Discussed: None Other Strengths And Weaknesses: Disadvantages: 1. While the paper extends the monotone improvement guarantees of Clip-PPO to L-PPO, the theoretical analysis can be improved. More rigorous theoretical justification and analysis of why the ICET vulnerability occurs and how L-PPO addresses it would strengthen the paper significantly. 2. The paper uses a limited set of metrics (ASR, TR, TS) and tools (Llama Guard, Perspective API) to measure harmfulness. However, these automated metrics may not fully capture human judgment of harmfulness. Other Comments Or Suggestions: 1. It would be better if Fig. 1 and Fig. 2 could be made more refined. 2. A minor detail: It seems that "After RLHF" and "After Alignment" in Fig. 3 convey the same meaning. If so, please use the same term to avoid confusion. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your suggestion and questions. Please see our responses below. ***Note: References are added in our response to reviewer WDjm to optimize space.*** --- ### Q1) Improving the Theoretical Foundations of ICET and L-PPO to Strengthen the Paper As discussed in the paper, we attribute the ***ICET vulnerability to a distributional mismatch caused by using intermediate image embeddings not seen during training or safety alignment.*** These embeddings, when fused with harmful text embeddings, yield joint representations that are out-of-distribution (OOD) i.e. lie in a different region of the embedding space than that produced by the default image encoder layer causing deviation from the training safety trajectory. This leads to harmful responses. We will clarify this in Section 3.1 of the revised submission. Following your suggestion, we are adding references and discussion to better connect our work to the broader issue of ***inference-time deviations causing model vulnerabilities.*** Additionally, we present an alternative perspective on the ICET vulnerability in terms of the expected regret concept in RL, as follows: ### ICET Vulnerability as Expected Regret Given an initial policy $\pi_{\phi}^{RL}$ that takes ICET- $l$ embeddings, defined as $e_{l}$ = $𝓔_{θ}^{l}(x_{i})$, we define ICET vulnerability as the expected regret between the initial policy and the optimal policy $\pi_{\phi}^{*}$ as follows: $$\text{ICET Vulnerability} = \mathbb{E} \left[ \sum_{t=1}^{T} \left( J\left(\pi_{\phi}^{*}\right) - J\left(\pi_{\phi}^{RL}\right) \right) \right]$$ where $T$ are the total time steps. This reflects the expected difference in performance ($J$) between the optimal policy $\pi_{\phi}^{*}$ and the initial policy $\pi_{\phi}^{RL}$. As noted in Section 3.3, L-PPO ensures monotonic improvement in performance or reduction in expected regret. A higher regret is expected before L-PPO alignment, as the VLM was not trained to be safe w.r.t intermediate layer embeddings obtained from image encoder early exits (ICET). **Our goal with L-PPO is to minimize the above-shown regret, by explicitly aligning the VLM with intermediate layer embeddings.** We will revise Section 3.1 of our submission to include these details. --- ### Q2) Incorporating Human Evaluation to Complement Automated Safety Metrics Thank you for pointing this out. We conducted a human evaluation to further validate our findings. Specifically, following [4], we enlisted three volunteers to assess the model outputs through a blind evaluation; each annotator was unaware of which model or setting (i.e., different image encoder layers, before and after L-PPO alignment) produced a given response. The outputs were shuffled and presented in separate spreadsheets to each annotator to prevent any bias. Each volunteer assesses a total of 200 outputs derived from the settings mentioned. Annotators were instructed to label a response as an attack success if they deemed the generated content harmful. We observed strong inter-annotator agreement, with a **Fleiss’ Kappa of 0.8916**. The full average Attack Success Rate (AASR) results will be updated in Appendix K of the revised submission due to space constraints. --- ### Q3) Improving the Clarity and Design of Figures 1 and 2 Thank you for the helpful suggestion. We have revised Figures 1, and 2 to improve their clarity. For Figure 1, we replaced the original camera image with the airplane image from Appendix B, which more clearly depicts a benign input. We also improved the layout and typography and added labels (“Prompt” and “Model Output”) to enhance interpretability. For Figure 2, we refined the visual elements to better convey layer-wise alignment variation and ensured consistent styling across subfigures (A, B, and C). We also removed emojis and now rely on clear color coding, red for harmful and green for safe. We will update the improvements in the final version, and we are happy to incorporate further suggestions! --- ### Q4) Expanding Discussion of Related Work in Multimodal Models and Alignment Approaches We agree with your suggestion, and will revise the related works section as follows: 1) As noted in our response to Q1, we will incorporate related work on inference-time deviations from training-time configurations to better contextualize ICET vulnerability within broader discussions on generalization and robustness in safety training under OOD scenarios. 2) We will expand our discussion to include alignment techniques beyond RLHF, such as supervised safety training, adversarial training, and unlearning, along with references to key studies for each. --- ### Q5) Ensuring Terminology Consistency in Figure 3 ("After RLHF" and "After Alignment") We appreciate your help in improving the paper. We have fixed the inconsistency and ensured overall terminology consistency. Revisions will be updated in the final version of the paper.
Summary: The paper researches the safety alignment of Vision-Language Models (VLMs) and identifies a vulnerability called "Image enCoder Early-exiT" (ICET), where using intermediate layers of the image encoder increases the risk of harmful outputs. The paper reveals that skipping certain layers and performing early exits can significantly compromise safety alignment, leading to an increased likelihood of generating harmful responses. To address this issue, the authors propose Layer-wise PPO (L-PPO), a modified version of Clipped-Proximal Policy Optimization (Clip-PPO), which applies layer-wise multi-modal RLHF (Reinforcement Learning from Human Feedback) to improve safety alignment. Experiments conducted on multiple models on different datasets demonstrate that L-PPO effectively mitigates the ICET vulnerability. Claims And Evidence: The paper validates the existence of the ICET vulnerability through extensive experiments involving multiple VLMs (Llama 3.2 Vision and LLaVA-NeXT), multiple datasets, and multiple evaluation metrics. The experimental data support its claim that early exit poses a risk to safety alignment. Methods And Evaluation Criteria: The Layer-wise PPO method is reasonable and interesting, aligns with multi-modal RLHF, and directly mitigates the ICET vulnerability. The paper evaluates it using ASR, TS, and TR, comparing results with non-optimized baselines for reliability. Experiments on multiple datasets (RedTeam 2K, miniJailbreak-V28K, AdvBench-COCO) confirm its generalization. Theoretical Claims: The paper conducts a theoretical analysis based on the Monotone Improvement theorem of the PPO algorithm and proves that L-PPO guarantees policy improvement, with a clear derivation process. It also provides a mathematical analysis of L-PPO and explains its effectiveness from the perspective of optimization objectives. Experimental Designs Or Analyses: The paper conducts experiments across different layers (early, middle, late) to analyze the impact of ICET on VLM outputs and compares results before and after L-PPO training, demonstrating its effectiveness. **Weakness**: 1. The paper does not examine the sensitivity of key parameters, including the KL constraint $\eta$, and the weighting coefficient of value loss $c_1$. 2. The paper focuses on safety alignment but does not thoroughly evaluate L-PPO's impact on overall VLM performance. ATR-UT on VQA-v2 shows a noticeable decline in utility, yet there is no broader assessment across diverse tasks to determine whether L-PPO negatively affects general model capabilities, particularly the performance drop caused by early-layer adjustments. Supplementary Material: I have reviewed the supplementary material in its entirety and have carefully examined the **Additional Results** section in detail. Relation To Broader Scientific Literature: The paper clearly highlights the improvements of L-PPO over Clip-PPO and validates its advantages through experiments. While prior studies focus on input perturbations or LLM layers in harmful content generation, this work is the first to investigate how variations in layer-wise embeddings of image encoders impact VLM safety alignment. Essential References Not Discussed: There are no essential related works missing that are crucial for understanding the paper’s key contributions. Other Strengths And Weaknesses: **Stregnth:** The paper is well-written and the motivation of this paper is clear and interesting. **Weakness:** L-PPO seems to require separate training for each layer, meaning a model optimized for layer $l$ improves safety alignment only for that specific exit but not for layer $l+1$ or others. A key question is whether L-PPO can be adapted to improve safety alignment across all layers simultaneously. Other Comments Or Suggestions: N/A Questions For Authors: See above Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We’re happy to hear that you found our work interesting and novel, the paper well written, and our proposed L-PPO method effective in addressing the identified ICET vulnerability. We address your questions below: --- ### Q1) Effect of KL Constraint (η) and Value Loss Coefficient (c₁) Thanks for your comments on the hyperparameters. The KL divergence coefficient (η) is adjusted adaptively based on a target KL value [1]. For the value function coefficient (c₁), we performed a hyperparameter search over a grid of {0.1, 0.12, 0.15, 0.17, 0.2} and found the overall results to be insensitive to hyperparameter values. We will update Appendix F to include these details. --- ### Q2) Evaluating the Impact of L-PPO on Utility and General Model Performance Thank you for your insightful comment. As shown in Table 3 of our submission, the average total reward drops by 8%, which is comparable to other post-training strategies for VLMs, such as unlearning [4]. To further assess the impact on the model’s utility, we conduct the following new experiments: 1. **Accuracy Evaluations on VQA-v2 using LLaVA-1.5**: As per the VQA-v2 protocol, we conduct experiments to calculate standard accuracy for evaluating visual question answering correctness. Following the same setting as Table 3 of our submission, we perform layer-wise alignment (L-PPO) using the RedTeam 2K (RT) dataset. ***The accuracies are reported below. We observe that accuracy remains comparable across early, middle, and late layers even after L-PPO***, confirming that the general question-answering utility of the VLM is preserved. Further details will be provided in Appendix M. | Layer Set | ACC-UT Original | ACC-UT Aligned | | -----------|-----------------|----------------| | Early | 24.8 | 24.85 | | Middle | 47.25 | 46.52 | | Late | 74.14 | 73.32 | | Average | 48.73 | 48.23 | |*Table 1:*|*Train set is RT and* |*Test set is VQA-v2*| 2. **Over-Rejection Evaluations on XSTest using LLaVA-1.5**: We also take a step ahead and conduct new experiments on over-rejection. To assess over-rejection on safe questions, we conducted experiments on the safe split of the XSTest dataset [6]. We sampled 150 safe prompts and paired them with MS-COCO images (as in our AdvBench-COCO dataset) and compared rejection ratios between the original model and the L-PPO aligned model. Rejections were evaluated using the standard string-matching code from [6]. As shown below, ***rejection ratios decreased across early layers and remained comparable in the middle and late layers. This indicates that L-PPO does not cause over-rejection on safe inputs and can even improve refusal behavior.*** Further details will be provided in Appendix M. | Layer Set | Refusal Ratio Original | Refusal Ratio Aligned | |------------|------------------------|---------------------------| | Early | 0.084 | 0.065 | | Middle | 0.027 | 0.023 | | Late | 0.029 | 0.032 | | Average | 0.047 | 0.040 | |*Table 2:*|*Train set is RT and* |*Test set is XSTest*| --- ### Q3) Layer-Specific vs. Unified Alignment: Scope of L-PPO In this paper, our proposed L-PPO is designed to provide safety alignment on a per-layer basis, ensuring effective reduction in the ICET vulnerability. We agree that adapting L-PPO to improve safety alignment across all layers simultaneously is a very interesting direction and a natural next step. However, since each layer of the image encoder learns distinct representations, enforcing alignment across all layers at once could compromise the fine-grained safety guarantees that L-PPO provides, and might also affect overall utility. Therefore, we leave this for future research, where techniques such as cross-layer knowledge transfer or global safety constraints could be explored to extend the capabilities of L-PPO across the entire model. ### References [1] TRL: Transformer Reinforcement Learning, HuggingFace\ [2] Huanjin et al. Dense Connector for MLLMs., Neurips 2024\ [3] Skean et al. Layer by Layer: Uncovering Hidden Representations in Language Models., ArXiv 2025\ [4] Chakraborty et al. Cross-Modal Safety Alignment: Is textual unlearning all you need?, EMNLP 2024\ [5] Ouyang et al. Training language models to follow instructions with human feedback., 2022, arXiv\ [6] Röttger et al. XSTEST: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models, NAACL 2024\ [7] Yu et al. Robust LLM safeguarding via refusal feature adversarial training, ICLR 2025\ [8] Chen et al. SelfIE: Self-Interpretation of Large Language Model Embeddings., 2024, arXiv --- Rebuttal Comment 1.1: Comment: Thank you for clarifying the effect of the hyperparameters, and the additional experiments on VQA and over-rejection. Your findings show that L-PPO remains effective while preserving overall utility, which addresses our primary concerns. I maintain my positive score and recommend the paper for acceptance. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate you recommending the paper for acceptance and we’re pleased to hear that your concerns have been resolved. We are happy to address any further questions you may have.
null
null
null
null
null
null
Ranked from Within: Ranking Large Multimodal Models Without Labels
Accept (poster)
Summary: This paper addresses the challenge of ranking large multimodal models (LMMs) without access to labeled data. Specifically, the authors propose uncertainty-based ranking methods that utilize softmax probabilities, self-consistency, and labeled proxy sets to estimate model performance. They evaluate 45 LMMs on eight benchmarks and find that uncertainty-based metrics, particularly negative log-likelihood (NLL), provide a robust ranking mechanism. The study suggests that relying solely on benchmark performance for ranking models may be unreliable due to domain shifts. Claims And Evidence: Yes. The study tests 45 LMMs across eight diverse benchmarks, demonstrating strong empirical rigour. Methods And Evaluation Criteria: The analysis encompasses multiple ranking methods, ablation studies, and cross-domain correlation assessments, ensuring a comprehensive evaluation. However, the authors do not provide clear guidance on selecting appropriate ranking methods for different scenarios. Theoretical Claims: N/A. This paper does not include any theoretical claims. Experimental Designs Or Analyses: Yes. The analysis encompasses multiple ranking methods, ablation studies, and cross-domain correlation assessments, ensuring a comprehensive evaluation. Supplementary Material: Yes, I have reviewed all of the supplementary material. Relation To Broader Scientific Literature: This paper estimates model performance using ranking methods, such as NLL loss, Entropy, BLEU, and BERTScores. Essential References Not Discussed: This paper contains enough references. Other Strengths And Weaknesses: Weaknesses: 1. Although the authors conduct many experiments testing the model performance across many ranking methods(NLL loss, Entropy, BLEU, and BERTScores), they do not provide clear guidance on selecting appropriate ranking methods for different scenarios. 2. In Models of Section 5.1, the authors claim to provide two settings. However, they only explain one, while the other setting is missing. 3. The novelty of this paper seems limited. The authors only test many existing ranking methods (NLL loss, Entropy, BLEU, and BERTScores) without providing new insights. 4. In Section 6, the authors only evaluate the series of LLaVA models, raising concerns about the generality of the analysis. 5. In Section 5.1, some citations appear as question marks, indicating missing or improperly formatted references. Other Comments Or Suggestions: Please refer to the weaknesses. Questions For Authors: Please refer to the weaknesses. Ethics Expertise Needed: ['Other expertise'] Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > Q1. The novelty of this paper seems limited. The authors only test many existing ranking methods (NLL loss, Entropy, BLEU, and BERTScores) without providing new insights. **A**: While NLL, Entropy, BLEU, and BERTScore have been widely used for uncertainty quantification in LLMs, to the best of our knowledge they have not been explored for unsupervised LMM ranking. Our work is the first to systematically investigate their applicability in this context. Through extensive experiments across diverse models and benchmarks, we demonstrate that $NLL_{max}$ is the most reliable and accurate ranking proxy measure. Additionally, we introduce a simple yet effective modification (*i.e.*, $Sample_{BERT}^*$) to $Sample_{BERT}$, improving its consistency. Furthermore, we observe that self-consistency methods perform better for LMM ranking in VQA than MCQA, and that $Sample_{BERT}^*$ outperforms BLEU. In the revised version, we will highlight our core findings: (1) model performance on one dataset does not reliably reflect its ranking on another; (2) uncertainty in model predictions can be predictive of model rank, **with $NLL_{max}$ being the most accurate technique in general**; and (3) text prompt similarity better correlate with model performance across datasets than image feature similarity. > Q2. They do not provide clear guidance on selecting appropriate ranking methods for different scenarios. **A**: We agree this would strengthen the paper. In the revision, we provide guidance on selecting appropriate ranking methods based on model accessibility and computational constraints, as follows. - $NLL_{max}$ is generally the most accurate, efficient and consistent method for LMM performance ranking. - While self-consistency methods are also competitive for VQA, they require K-times the compute, where K is the number of unique inference outputs per prompt. However, for closed-source models where internal statistics (*e.g*., logits) are unavailable, self-consistency-based methods are more suitable. Among these, $Sample_{BERT}^*$ is the recommended approach. > Q3. In Models of Section 5.1, the authors claim to provide two settings. However, they only explain one, while the other setting is missing. **A**: Thank you for picking up this error - it is a typo. Section 5 only contains one setting, where different series of models are evaluated. We have corrected this in the revision. > Q4. In Section 6, the authors only evaluate the series of LLaVA models, raising concerns about the generality of the analysis. **A**: Section 6 includes three distinct analyses. The first specifically investigates whether the considered methods are effective for ranking models within the same series. We use LLaVA models as a case study because they are widely adopted and representative of LMM architectures. Furthermore, the diversity within the LLaVA series is ensured by incorporating LLaVA-prismatic models trained with varying visual encoders and large language models, making the analysis more comprehensive. We will clarify this in the paper. The other two analyses in Section 6 consider models from different series, consistent with the broader evaluation in Section 5. We have rewritten this section to make this clearer. > Q5. In Section 5.1, some citations appear as question marks, indicating missing or improperly formatted references. **A**: Thank you for your meticulous comment. we will double-check and correct the improperly formatted references.
Summary: This paper presents a study on ranking methods for evaluating large language models (LLMs) without accessing labels. The authors main findings are that Accuracy-on-the-Line is unreliable for ranking LLMs in new domains, the choice of token significantly impacts output probability-based ranking, and the negative log-likelihood (NLL) is more stable and predictive for LLM ranking than other evaluation methods. Claims And Evidence: Yes, the claims are well-supported and make sense in light of the experimental results. Methods And Evaluation Criteria: Yes, this manuscript introduces a ranking method that effectively evaluates LLMs without relying on labeled data. The NLL (Negative Log-Likelihood) metric demonstrates highly stable performance indicators for target domains. Theoretical Claims: This work does not include proofs for its theoretical claims. However, based on the results provided by the authors, the findings appear to be reasonable. Experimental Designs Or Analyses: Yes, the experimental designs and analyses appear to be sound based on the reported correlation studies. Supplementary Material: The supplementary material contains details of dataset and models involved in the research. The visualization of image and Text features, and the full results of correlation studies are also presented in it. Relation To Broader Scientific Literature: The paper points out the unreliability of evaluating and ranking LLMs across different datasets, and provides a method for assessing the ranking of LMMs in the absence of labeled data. This method yields more stable evaluation results across different domains compared to previous approaches. It holds significant importance for further research into the performance of LLMs and issues related to their deployment. Essential References Not Discussed: In the section "Related Work: Evaluation & Benchmarking LMMs," the paper mentions RealWorldQA and several other benchmarks for evaluation purposes. However, it fails to address some commonly used benchmarks for LLMs, such as CV-benchmarks (e.g., GQA, MMVP, OCRBench) and math-benchmarks (e.g., MathVerse, MathVista). Other Strengths And Weaknesses: Strengths: The authors have investigated the important issue of evaluating models using unlabeled data and proposed a corresponding ranking evaluation method. The results make sense and have yielded many findings beneficial to the development of the community. The authors evaluated the performance on 45 LLMs, demonstrating good breadth. Weaknesses: However, 1.the benchmarks used by the authors for evaluation seem limited and unbalanced. The paper employs four types of benchmarks: Optical Character Recognition, Science, Vision, and General. But the authors appear to focus mainly on document OCR-related benchmarks (such as TextVQA, ChartVQA, DocVQA, OCRVQA), with very little attention to vision-related benchmarks (only one RWQA). There seems to be a lack of evaluation in areas that are also very important for LLM assessment, such as the evaluation of LLMs' spatial understanding capabilities. 2.In addition, the evaluation models used by the authors also seem limited, involving only some LLaVA series models or relatively older models like InstructBLIP and mPLUG-Owl2. There is a lack of assessment of newly developed models, such as the Qwen series and DeepSeek series. It needs further examination whether these models still follow the evaluation criteria proposed by the authors. The authors should also clarify whether the evaluation of closed-source models follows the proposed pattern. Other Comments Or Suggestions: To enhance the comprehensiveness of the evaluation, the authors should 1.consider incorporating more vision and spatial understanding-related benchmarks, such as GQA and MMVP. 2.They should also update their model selection to include recently developed LLMs like the Qwen series and DeepSeek series to validate the universality of their proposed criteria. 3.Finally, the authors should elaborate on the applicability of their evaluation framework to closed-source models to ensure its relevance across different model types. Questions For Authors: Please see weakness and suggestions above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > Q1. In the section "Related Work: Evaluation & Benchmarking LMMs," the paper mentions RealWorldQA and several other benchmarks for evaluation purposes. However, it fails to address some commonly used benchmarks for LLMs, such as CV-benchmarks (e.g., GQA, MMVP, OCRBench) and math-benchmarks (e.g., MathVerse, MathVista). **A**: Thank you for pointing out these benchmarks. We will include them in the related work. The following text will be used in Section 2: *“Several benchmarks (x.ai, 2024; Ainslie et al., 2023; Tong et al., 2024) have been developed to assess multimodal models' real-world spatial understanding. Lu et al. (2024) and Zhang et al. (2024) introduce benchmarks specifically designed to evaluate MLLMs' mathematical reasoning, focusing on their ability to comprehend and reason visual mathematical figures. Additionally, numerous benchmarks (Mishra et al., 2019; Masry et al., 2022; Mathew et al., 2021; Liu et al., 2024) assess the performance of LMMs in optical character recognition.”* *[1] x.ai. Grok 1.5v: The future of ai models, 2024. URL https://x.ai/blog/grok-1.5v.* *[2] Ainslie, J., et al. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. In EMNLP, 2023.* *[3] Tong, S., et al. Eyes wide shut? exploring the visual shortcomings of multimodal llms. In CVPR, 2024.* *[4] Lu, P., et al. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In ICLR, 2024.* *[5] Zhang, R., et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? In ECCV, 2024.* *[6] Liu et al., Ocrbench: on the hidden mystery of ocr in large multi-modal models. Science China Information Sciences, 67(12), December 2024d.* > Q2. Consider incorporating more vision and spatial understanding-related benchmarks, such as GQA and MMVP. They should also update their model selection to include recently developed LLMs like the Qwen series and DeepSeek series to validate the universality of their proposed criteria. **A**: Thank you for your valuable suggestion. Following your recommendation, we incorporated four new models—QwenVL, QwenVL2, DeepSeek-VL, and DeepSeek-VL2—and added GQA as an additional dataset for evaluation. We summarize the results in the table below, where the metric is rank correlation, and AoL performance represents the average correlation strength when using the other eight domains to predict rankings in the target domain. - Generalizability to new models: The uncertainty-based ranking methods (*e.g.*, $NLL_{max}$) successfully extend to newly introduced models. - Performance on new dataset GQA: We observe consistent trends: (1) AoL does not reliably rank model performance, and (2) $NLL_{max}$ provides a more stable and indicative ranking signal compared to other methods. These new results further support the applicability and robustness of the evaluated uncertainty-based methods. We will incorporate them into the revised version. | | AoL | $NLL_{F}$ | $NLL_{P}$ | $NLL_{max}$ | $NLL_{mean}$ | $Ent_{F}$ | $Ent_{P}$ | $Ent_{max}$ | $Ent_{mean}$ | $Sample_{BLEU}$ | $Sample_{BERT}$ | $Sample_{BERT}^*$ | |--------------|:----:|:-----:|:-----:|:-------:|:--------:|:-----:|:-----:|:-------:|:--------:|:-----------:|:-----------:|:------------:| | **SQA-I** | 0.54 | 0.84 | 0.66 | **0.81** | 0.74 | 0.63 | 0.50 | 0.62 | 0.59 | 0.68 | 0.51 | 0.70 | | **ChartQA** | 0.68 | 0.62 | 0.79 | **0.94** | 0.92 | 0.64 | 0.82 | 0.89 | 0.91 | 0.86 | 0.87 | 0.87 | | **GQA** | 0.59 | 0.64 | 0.55 | **0.70** | 0.63 | 0.47 | 0.46 | 0.52 | 0.52 | 0.59 | 0.61 | 0.61 | > Q3. Finally, the authors should elaborate on the applicability of their evaluation framework to closed-source models to ensure its relevance across different model types. **A**: We appreciate the reviewer’s insightful suggestion. While our experiments focus on models with accessible internal outputs, we note that **our evaluation framework can be extended to closed-source models as well**—particularly through methods that rely solely on final predictions, such as $Sample_{BERT}$. These approaches operate without access to logits or internal statistics and can support **label-free model selection** even in restricted-access settings. We agree this distinction is important and will clarify in the manuscript that our framework can be **applied to closed-source models**, depending on the availability of output-level information, without requiring structural modifications or access to internals.
Summary: The paper investigates alternative ways to rank large multimodal models on new domains in absence of ground truth annotations. The authors compare 3 types of approaches: (1) labeled proxy datasets (AoL: where the performance on N-1 datasets is used to predict the performance on the Nth dataset; ATC: where proxy datasets are used to extract a confidence threshold to be used for the new unlabeled dataset), probability-based (NLL, Entropy), and self-consistency (sample-based). By running an extensive analysis with 45 models (30 different models and their versions) on 8 multiple-choice or open-ended visual QAs, the authors conclude that probability-based metrics tend to be more reliable and show strong correlations with actual performance, compared to AoL or ATC. An additional analysis is provided for AoL, to shed light on its low correlation with performance, showing that text similarities across datasets is more indicative of strong correlation with performance than image similarities. ## update after rebuttal ## I read the rebuttal and the other reviews and I maintain my initial score. Claims And Evidence: This is a compelling submission, with potential for impact in how benchmarking of large multimodal models is approached. The claims are clearly presented and backed by an extensive set of experiments. Methods And Evaluation Criteria: The proposed analysis is comprehensive and robust, including multiple metrics (probabilistic, sample-based, proxy-labelled). Theoretical Claims: NA Experimental Designs Or Analyses: The empirical analysis is extensive and covers well the space of multimodal models. Supplementary Material: I checked the supplementary material. The figure with t-sne visualisations is very interesting. Relation To Broader Scientific Literature: The related work section covers relevant works, but some are missing, see below. Essential References Not Discussed: These works should be discussed as well The Internal State of an LLM Knows When It’s Lying https://aclanthology.org/2023.findings-emnlp.68.pdf Estimating Knowledge in Large Language Models Without Generating a Single Token https://aclanthology.org/2024.emnlp-main.232.pdf Other Strengths And Weaknesses: The impact section could include a statement about the possibility of training models to elicit high confidence when producing deceiving answers. Other Comments Or Suggestions: The paper is well written. A few typos: L295: Prob_min, Prob_avg are not used in the table, they should be NLL_max, NLL_avg? L247-248, some citations are incorrect and appear as “?” Questions For Authors: 1. Would it be possible to train models to elicit high confidence when producing deceiving answers? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1. These works should be discussed as well: (1) The Internal State of an LLM Knows When It’s Lying (2) Estimating Knowledge in Large Language Models Without Generating a Single Token **A**: Thank you for your valuable suggestions. We will include these two studies, along with other relevant works on the internal states of LLMs and LMMs, in the related work section. Additionally, we will discuss the limitations of using internal states for model ranking and potential directions for future research as below: *"Internal states of LMMs can be leveraged for uncertainty quantification or error detection by training a classifier. However, in the context of unsupervised LMM ranking, a separate classifier must be trained for each LMM, making the approach computationally expensive. One potential alternative is to measure the statistical distance between the internal state of the original answer and the distribution of internal states of multiple inference processes. A larger average distance may indicate higher uncertainty in model predictions, suggesting a lower model rank. It would be interesting to investigate the use of internal states for LMM ranking is left as future work.”* > Q2. The impact section could include a statement about the possibility of training models to elicit high confidence when producing deceiving answers. **A**: Thank you for the suggestion. We will incorporate this discussion into the broader impact section: *"Our findings could be misused: adversarial researchers may train models that consistently get ranked higher than other models, despite performing poorly or adversarially on data. To mitigate this risk, incorporating robust calibration methods is helpful, ensuring that model confidence accurately reflects uncertainty. This would help promote safer and more responsible deployment of our findings."* > Q3. A few typos: L295: $Prob_{min}$, $Prob_{avg}$ are not used in the table, they should be $NLL_{max}$, $NLL_{avg}$? L247-248, some citations are incorrect and appear as “?” **A**: Thank you for your meticulous comment. The $Prob_{min}$ and $Prob_{avg}$ should be $NLL_{max}$ and $NLL_{avg}$. We will carefully review the manuscripts and correct all typos.
Summary: This paper explores how to effectively evaluate the performance of LMMs without requiring task-specific labels, and systematically validates three types of model uncertainty-based approaches, including softmax probabilities, self-consistency, and labeled proxy sets. Comprehensive experiments across various LMMs and benchmarks show that uncertainty scores derived from softmax distributions provide a robust and consistent basis for ranking models across various tasks, facilitating the ranking of LMMs on unlabeled data. ### update after rebuttal The author's rebuttal addressed several of my concerns. I keep my original rating. Claims And Evidence: Strengths: - This paper presents an important and interesting problem regarding how to evaluate LMMs without requiring task-specific labels. - The authors perform extensive experiments to analyze how well uncertainty-based metrics can measure relative model performance, resulting in some empirical findings. Weaknesses: - Although the authors derived a series of empirical conclusions through extensive experimental analysis, - 1) they did not propose new methods or metrics based on these findings for selecting the optimal LMM for a given task in an unlabeled scenario. - 2) The findings in this paper lack universality, are somewhat diffuse, and would benefit from a more concise formulation. As a reader, these findings cannot directly guide me to quickly and effectively select the optimal LMM for a given task without manual labels. Methods And Evaluation Criteria: Yes. Theoretical Claims: This paper does not involve the claim and proof of novel theories. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, I reviewed all sections of the supplementary material. Relation To Broader Scientific Literature: While the paper focuses on an intriguing and valuable problem and conducts extensive experimental analysis, its contribution and impact on the broader literature are not fully clear. Because the authors did not introduce novel methods and all implemented uncertainty-based methods are pre-existing. Essential References Not Discussed: No Other Strengths And Weaknesses: Other Strengths: - The paper additionally provides detailed correlation analysis of model performance. Other Comments Or Suggestions: - making the findings more concise and instructive. Questions For Authors: - What is the core finding of this paper? - How does this finding contribute to the LMM community? - How can the readers employ this finding to evaluate LMMs without label annotations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1. Authors did not propose new methods or metrics based on these findings for selecting the optimal LMM for a given task in an unlabeled scenario. **A**: We acknowledge that our work does not introduce new methods or metrics. However, it addresses a previously under-explored yet practically important problem: how to evaluate and select LMMs in unlabeled scenarios. We systematically analyze existing uncertainty quantification techniques across diverse models and datasets, offering insights into their effectiveness and failure cases. These indicators serve as practical tools for model selection without labels, which we believe is a critical step toward reliable LMM evaluation. Our findings also highlight the promise of uncertainty-based approaches, motivating future research into new metrics for this setting. > Q2. What is the core finding of this paper? **A**: Our key findings are: (1) model performance on one dataset does not reliably predict its ranking on another; (2) uncertainty in model predictions can be predictive of model rank, **with $NLL_{max}$ being the most accurate technique in general**; and (3) text prompt similarity better correlate with model performance across datasets than image feature similarity. We have clarified this in the revised paper, making these observations more crisply than in the initial submission. > Q3. How does this finding contribute to the LMM community? **A:** Our findings highlight three key contributions to the LMM community. First, the weak correlation of model performance across datasets underscores the need for diverse multimodal benchmarks to comprehensively assess LMM capabilities. Second, we find that text prompt similarity has a stronger influence on model performance correlations across datasets than image feature similarity. This finding emphasizes the importance of evaluating LMMs with diverse textual prompts rather than focusing solely on image variation. Third, the effectiveness of baseline uncertainty quantification methods suggests uncertainty estimation as a promising approach for unsupervised LMM ranking. > Q4. How can the readers employ this finding to evaluate LMMs without label annotations? **A:** Given a test dataset without labels and multiple LMMs to evaluate, our findings provide a practical framework for ranking and selecting the most suitable model. Specifically, there are two approaches: - For **models with access to internal statistics (*e.g*., logits)**, their $NLL_{max}$ scores can be directly used to rank model performance. - For **closed-source models**, where internal statistics are unavailable, $Sample_{BERT}^*$ serves as a viable alternative for effective performance ranking.
null
null
null
null
null
null
Identifying Causal Direction via Variational Bayesian Compression
Accept (spotlight poster)
Summary: This work proposes using Bayesian neural networks with variational inference (I will refer to this as variational BNNs) for bivariate causal discovery via the MDL principle (COMIC). The causal direction is determined by the direction that best trades off model complexity with model fit under the factorization $P = P_{\text{cause}} P_{\text{effect}\mid\text{cause}}$. COMIC parametrizes the conditional model as a variational BNN and takes advantage of the fact that minimizing the ELBO can be directly interpreted as description length, and thus variational inference in this case performs MDL. The authors adapt recent identifiability theory from a related paper that also uses Bayesian model selection in this setting (Dhir et al., 2024) to show identifiability. Experimental results show that COMIC outperforms most competitors on a large set of benchmark datasets. Claims And Evidence: To my judgement, the claims are well-supported. Methods And Evaluation Criteria: The proposed method builds on a large body of literature using the MDL principle for causal discovery. Given the connection between variational BNNs and MDL, and given recent work that Bayesian methods provably give sharper identifiability (Dhir et al., 2024), the method proposed is timely and makes a lot of sense. The benchmarks used are standard in this area and go beyond what was done in the previous work by Dhir et al. Theoretical Claims: The claim is that the variational BNN model with Gaussian likelihoods is not separable--compatible, which yields causal direction identifiability via the marginal likelihoods in the sense of Dhir et al., 2024. I found this to be a weak part of the paper. To my understanding, the model classes in Proposition 4.2 represent the class of Gaussian LSNMs, but 1) the proof is only given rigourously for ANMs, which are essentially identifiable in the non-Bayesian sense, which according to Dhir et al. implies identifiability in the Bayesian sense, and the same for LSNMs (Immer et al., 2023). Also, I did not check the proof closely (I am not that familiar with the idea of separable--compatible), but I couldn't actually tell from the proof why it was necessary to analyze the BNNs as GPs (which only holds in the infinite--width limit), it seems the Gaussianity of the observation noise would be enough (if the identifiability of ANMs with equal noise variance in general were not enough already!) Experimental Designs Or Analyses: I think the benchmarks/experimental design is pretty standard for bivariate causal discovery. Supplementary Material: Yes, but I focused on Appendix C., the causal identification section. Relation To Broader Scientific Literature: I think the relevance of bivariate causal discovery to the broader scientific literature is quite clear at this point, but it may be worthwhile to point out the relation to multivariate causal discovery (e.g., how to use this method directly or that one can first use bivariate causal discovery to orient edges in CPDAGs). Essential References Not Discussed: Appendix B.2 of Dhir et al., 2024 also connect the MDL principle to evidence maximization in Bayesian models. Other Strengths And Weaknesses: ### Strengths I mentioned above that I think this is generally a timely paper that spells out the known connection between variational BNNs and MDL to connect MDL-based causal discovery (older idea in this area) to Bayesian model selection-based causal discovery (a new idea with new theoretical guarantees). It seems that this connection may have been noted by previous work (Dhir et al., 2024), but it was brief and buried in the appendix---my judgement is that it is worthwhile that the current submission expands on this. The other strength is that the experimental results really appear to be SOTA on these common benchmarks, improving already impressive performance by GPLVM. ### Weaknesses As a methodology, the novelty is limited as it can be seen as a version of the GPLVM-based method (Dhir et al., 2024) replacing the marginal likelihood with the ELBOs (a lower bound), and replacing the GP with a BNN. The identifiability theory also closely follows the previous paper. Other Comments Or Suggestions: - I would have preferred more background in Section 4.4, e.g., it was not clear what "priors over anti-causal factorizations" were in Definition 4.1 without reading the reference Dhir et al., 2024. - Under Corollary 4.3, I would also include the qualifier that we need the marginal model to be correctly specified as $P_{\text{cause}} = N(0,1)$. Questions For Authors: For me, a major conceptual contribution of the paper is that it spells out this tight connection between the MDL principle and Bayesian inference. However, I am not an expert in these fields, so one main question that I have which will influence my score is: 1. Can you clarify to what extent your description of the connection between Bayesian causal discovery and the MDL principle extends that of Appendix B.2. of Dhir et al., 2024? Other questions (ordered in terms of importance): 2. I understand using the same marginal model for the cause variable to be fair to both directions (Eqn. 14), but would this not just prefer the "more Gaussian" variable for the cause? Is this a reasonable bias for real-world data? 3. Can you clarify the contribution of your identifiability proof compared to the known existing results of (non-Bayesian) identifiability of ANM/LSNM? I am not particularly picky about (3), but if the bias in (2) is prominent I think it would be better to probe the sensitivity of the method to this (e.g., what does it imply for non-Gaussian causes)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate Reviewer 3osf's comments to further elucidate and enhance our work. The following responses address the major points that you raised. **1. Relation to Appendix B.2 of [1]** Our work originates from the causal discovery via MDL using neural networks, with variational Bayesian codelength being the encoding method. This approach latter converges to the marginal likelihood-based method outlined in GPLVM [1]. Although the two-part MDL formulation is briefly discussed in [1, App. B.2], to the best of our knowledge, our work is the first one to *explicitly* integrate this variational MDL codelengths into the learning process of neural networks for the downstream task of causal discovery. This approach fundamentally differs from GPLVM, which originally adopts a non-parametric approach to model inference and only considers the formulation with respect to the model complexity in a post-hoc manner. While both methods employ the variational learning, GPLVM uses the KL divergence terms to infer the distributions over latent variables and inducing points of sparse GPs [1, App. F.1], rather than optimizing the conventional model parameters. In contrast, our approach *directly* incorporates the model's complexity into the learning process by considering the complexity via the distributions over the parameters. **2. $N(0,1)$ for the marginals** As encoding the conditionals is our main focus in this work, we choose $N(0,1)$ as an uninformative choice to encode the marginals. In fact, excepts for the AN, AN-s, LS, LS-s, MN-U, and SIM-G pairs, the remaining benchmarks consist of pairs with a non-Gaussian cause, including the real-world Tübingen dataset. We have run additional analysis on different choices for the marginals including $N(0, 1)$, $U(X_{min}, X_{max})$, and Variational Bayesian Gaussian mixture model (GMM) [2]. The uniform encoding negatively affects the performance on most datasets. GMMs provide advantages on SIM, CE-Multi, and CE-Net synthetic benchmarks, whereas the performance on SIM-G, SIM-ln, and CE-Cha benchmarks are noticeably reduced. Although there is a slight increase on the Tübingen prediction accuracy, the AUROC score is also substantially decreased. As a result, we believe the standard Gaussian remains a reasonable choice for our method. Additionally, using a more flexible model for encoding the marginals will also challenge the theoretical analysis of our models' separable-compatibility. Accuracy: | Marginal | AN | AN-s | LS | LS-s | MN-U | SIM | SIM-c | SIM-G | SIM-ln | CE-Multi | CE-Net | CE-Cha | Tübingen | |:-|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:| | Gaussian | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.90 | 0.93 | 0.99 | 1.00 | 0.89 | 0.97 | 0.90 | 0.87 | | Uniform | 1.00 | 0.93 | 1.00 | 1.00 | 1.00 | 0.88 | 0.84 | 0.83 | 0.99 | 0.94 | 0.94 | 0.80 | 0.86 | | GMM | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.97 | 0.94 | 0.92 | 0.94 | 0.98 | 1.00 | 0.80 | 0.89 | AUROC: | Marginal | AN | AN-s | LS | LS-s | MN-U | SIM | SIM-c | SIM-G | SIM-ln | CE-Multi | CE-Net | CE-Cha | Tübingen | |:-|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:| | Gaussian | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.97 | 0.99 | 1.00 | 1.00 | 0.98 | 1.00 | 0.97 | 0.97 | | Uniform | 1.00 | 0.98 | 1.00 | 1.00 | 1.00 | 0.96 | 0.91 | 0.93 | 1.00 | 0.99 | 0.98 | 0.89 | 0.95 | | GMM | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.98 | 0.98 | 0.91 | 1.00 | 0.99 | 0.81 | 0.87 | [1] Dhir, A., Power, S., & Van Der Wilk, M. (2024). Bivariate Causal Discovery using Bayesian Model Selection. In ICML. [2] Blei, D. M., & Jordan, M. I. (2006). Variational Inference for Dirichlet Process Mixtures. Bayesian Anal., 1(1). --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for the thoughtful response to my questions. Regarding 1), thank you. The important difference here seems to be that your proposed method is parametric which allows interpreting the ELBO as codelength, while GPLVM is non-parametric. This is a nice contribution and I think you should emphasize this in the paper. Regarding 2), thank you for the clarifications on the experiments, but I was looking for some justification of the $N(0,1)$ choice as "uninformative". Could you either confirm/deny that the methodology currently does indeed have an inductive bias of picking the "more Gaussian" cause? Note, I don't consider this as a critical flaw of the method (IGCI has the same problem). But if the bias exists I think the authors should clearly state the limitations in the paper. Contingent on discussion about the limitations being added I will tentatively raise my score to 4. --- Reply to Comment 1.1.1: Comment: We would like to thank Reviewer 3osf for acknowledging our rebuttal and suggestions for improving our paper. The Gaussian marginal assumption is made independently of the data and does not require any additional learning or parameter tuning after standardization, which is also a common practice in nonlinear bivariate causal discovery [1, 2]. We chose this distribution because it currently provides a practical and effective encoding for the marginals in the bivariate setting, as shown in the results in the previous response. However, we agree that choosing this distribution can create an inductive bias towards "more Gaussian" cause in our method, which need to be considered in more complex settings, such as multivariate structure learning or in the presence of hidden confounders. We will include a detailed discussion on this choice of distribution for the marginals and corresponding limitations in the updated version of our paper. [1] Mooij, J. M., Peters, J., Janzing, D., Zscheischler, J., & Schölkopf, B. (2016). Distinguishing cause from effect using observational data: methods and benchmarks. JMLR, 17(32). [2] Immer, A., Schultheiss, C., Vogt, J. E., Schölkopf, B., Bühlmann, P., & Marx, A. (2023). On the Identifiability and Estimation of Causal Location-Scale Noise Models. In ICML.
Summary: The paper introduces COMIC (Causal direction identification via Bayesian COMpression), a novel method for determining causal relationships between pairs of variables using variational Bayesian compression with neural networks. This approach improves upon existing compression-based methods by balancing model fitness and complexity through variational inference, demonstrating superior performance across multiple synthetic and real-world benchmarks. COMIC effectively models conditional distributions while considering both model accuracy and complexity, achieving state-of-the-art results. Claims And Evidence: I list several core claims of the paper 1. COMIC improves model fitness while promoting succinct codelengths: The method achieves superior performance on both synthetic and real-world benchmarks, outperforming related complexity-based and structural causal model regression-based approaches. 2. The variational Bayesian coding scheme effectively approximates the algorithmic complexity of neural networks Theoretical derivation shows that variational coding length can be decomposed into model fitness (ELBO) and complexity terms (KL divergence), with experimental validation demonstrating its effectiveness in practice. 3. COMIC is identifiable for causal direction: Based on Bayesian model selection theory, the authors prove that their models are non-separable-compatible, ensuring causal direction can be distinguished by marginal likelihood differences. Methods And Evaluation Criteria: The paper utilizes neural networks to learn conditional distributions, overcoming limitations of traditional regression methods. It employs variational inference to assess neural network complexity, explicitly capturing model complexity while avoiding high computational costs. The causality is determined by comparing total coding lengths (marginal and conditional) for different directions. Evaluation criteria: mainly on proportion of correctly identified causal directions (auruc). This criteria makes sense. Theoretical Claims: I find no obvious problems. Experimental Designs Or Analyses: The method is compared with a set of other ones that at least relates to the complexity based principles. Also, they conduct location-scale modeling vs. location-only modeling: The former shows superior performance on complex datasets (e.g., 97% Bi-AUROC on Tübingen vs. significant drops for the latter), and the impact of model complexity: Ignoring KL divergence (optimizing only likelihood) leads to performance degradation (e.g., accuracy drops from 90% to 83% on SIM dataset). Hidden layer width is also studied. This is a factor that relates to "how well" the complexity can be approximated by the neural statistical approaches. I thus thing the evaluations are comprehensive. Supplementary Material: Yes (see comments before). Relation To Broader Scientific Literature: This method contributes a practical approach for causal discovery. It is based on the statistical approximation of KM complexity, and the ICM principle. Although the princicple is already proposed for years, this new method provides a bridge connecting the uncomputable KM to a statistical approximations, which makes causal discovery from observational data tractable. Its perspective of approximation KM using neural nets with appropriate optimization method is, to me, novel. Essential References Not Discussed: The citations are appropriate. Other Strengths And Weaknesses: The method is practical. On the other hand, the approximation limits are not taht clear, although it does not affect the main conclusions of the paper. Other Comments Or Suggestions: It would be better to come out with a compehensive approach on how to select to priror of the Bayes method. Questions For Authors: 1. Prop 4.2: I do not think "modeled by single-hidden-layer neural" should appear in a math claim. This should be formulated by the capability of the neural nets or function to be more rigorous. 2. Theorem 3.1: This better appears in appendix since it is already proven. 3. Appendix C.3: concerning location-scale model, is there some specific Bayes hyperparameter configuations? 4. You may also consider putting several core experiments regarding the key aspects of the paper, say, to what extend can the Bayes method approximates the uncomputable complexity, in main paper, rather than on appendix. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer YARz for your novelty and favorable qualities of our work. Our responses below aim to rectify the issues that you mentioned. **1. Formulating the capability of the neural networks in Prop. 4.2** Thank you for your recommendation, we will specifically formalize the function of the "single-hidden-layer neural networks" as described in App. C.3 to make our proposition more mathematically rigorous. **2. Theorem 3.1** Since Theorem 3.1 is directly related to Eq. (5) and (7) and is the criterion for determining the causal direction, it was presented to clarify the origin of our causal scores and make our paper self-contained and easier to follow. **3. Hyperparameter configurations in location-scale models** Similar to the additive noise models, our location-scale models are also Bayesian neural networks with one hidden layer, whose details on hyperparameters are presented in App. B and C.3. The only difference between these types of models is that the location-scale ones have an additional output node to regress the log scale parameters for the Gaussian likelihood for enhancing the ability to model aleatoric uncertainty of our models. **4. Putting more experiment results in the main paper** Due to the current limit of space, we can only accommodate the most crucial results in the main paper. We will consider present directly or including more specific discussions and references to these key results in the next version of our manuscript. **5. Choice of prior for Bayesian neural networks** Our selection of Gaussian priors is influenced by previous works on evaluating neural network's complexity, especially [1-3]. As a result, we utilize Gaussian priors, which is a common choice, for variational Bayesian learning of neural networks. These studies have shown that variational learning with the Gaussian priors and hyperpriors on the variances yields adequate and consistent results with respect to the implicit prequential coding, which has been demonstrated to achieve good compression bounds in [2] and [3]. [1] Louizos, C., Ullrich, K., & Welling, M. (2017). Bayesian compression for deep learning. In NeurIPS. [2] Blier, L., & Ollivier, Y. (2018). The description length of deep learning models. In NeurIPS. [3] Voita, E., & Titov, I. (2020). Information-Theoretic Probing with Minimum Description Length. In EMNLP.
Summary: This work does not focus on traditional or recently popular functional class-based methods. Instead, it aims to study more general models, where the asymmetry in determining causal direction is assumed based on the Kolmogorov complexity. However, due to the incomputability of Kolmogorov complexity, the Minimum Description Length (MDL) principle serves as a practical proxy. Building on this, the authors explore Bayesian neural networks for causal discovery, leveraging the natural connection between MDL and likelihood. Identifiability analysis for causal direction is provided, and experimental results demonstrate strong performance on both synthetic and real-world benchmark datasets. Claims And Evidence: The proposed method aligns with the claims made in the paper. Methods And Evaluation Criteria: Empirical evaluations on both synthetic and real-world datasets validate the effectiveness of the proposed method. Theoretical Claims: I reviewed the theoretical aspects at a high level but did not rigorously verify the correctness Experimental Designs Or Analyses: The provided analyses are generally well-conducted. Supplementary Material: The appendix contains theoretical proofs, which I reviewed briefly. Relation To Broader Scientific Literature: The paper addresses an important problem, which has received limited attention in prior research. Essential References Not Discussed: The authors may discuss related works that address general non-linear relationships by leveraging the non-stationarity of observations, such as Monti et al., "Causal Discovery with General Non-Linear Relationships Using Non-Linear ICA", and Huang et al., "Causal Discovery from Heterogeneous/Nonstationary Data" (JMLR, 2020). Additionally, since Bayesian neural networks are mentioned, it would be helpful to include a discussion in the related work section on their relevance, particularly in the context of causal discovery and uncertainty quantification, e.g., Bayesian causal discovery mentioneds. Other Strengths And Weaknesses: It is expected that some methods consider non-parametric functions beyond functional class-based approaches for causal discovery, as the latter are often difficult to validate in real applications. While I agree with Postulates 1 and 2 regarding the causal asymmetry from the Kolmogorov complexity (though I am unsure how widely accepted they are in the community), I have the following concerns: 1) The gap between Kolmogorov complexity and MDL: It is well known that Kolmogorov complexity is incomputable, meaning there is no general algorithm to compute the exact value of Kolmogorov complexity for arbitrary data. In this case, how can an approximation method be used to provide a solution? In other words, if an approximation exists, does that mean the problem is constrained in a way that makes it well-defined and solvable? Furthermore, does such a constrained Kolmogorov complexity remain consistent with the ability to determine causal asymmetry, similar to standard Kolmogorov complexity? In other words, do the Postulates still hold under this constrained Kolmogorov complexity? 2) The gap between MDL and Bayesian Neural Networks: Given that this work adopts an MDL approach, it is reasonable to use a Bayesian framework for model selection. However, there has been significant recent progress in Bayesian neural networks, particularly regarding compact model selection, such as using sparsity-inductive priors. In other words, when designing a model, one may focus on how to learn a compact model that aligns with the prior knowledge inherent in the MDL framework. However, this work appears to introduce a relatively simple sample prior, a Gaussian distribution with zero mean and unit variance. How can this model effectively learn a sparse representation? Furthermore, how should the number of layers in the neural network be determined to maximize the ability to learn a sparse model from the data? Overall, the gap between Kolmogorov complexity and MDL raises concerns about whether causal asymmetry holds under the MDL approximation. Furthermore, the implementation of methods—such as using a relatively simple sample prior, specifically a Gaussian distribution with zero mean and unit variance—further deepens these concerns. I find this work compelling, particularly its exploration of a practical direction for causal discovery. However, the concerns raised above have led me to question the significance of the contribution. I would be happy to revise my rating if these issues are addressed effectively in a revised version of the work. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer 3KpH for your valuable insights. We provide our responses below to clarify and resolve your concerns. **1. Kolmogorov complexity and MDL** There are two problems [1] when directly applying the Kolmogorov criterion in Eq. (4): 1. We do not have access to the true generating models $P_{X}$ and $P_{Y \mid X}$, and 2. The Kolmogorov complexity is not computable in practice. The former problem can be resolved by estimating the model through the joint complexity $K(x, P)$, which on expectation over $P(x)$ will yield $K(P) + H(P)$ up to an additive constant [2]. The latter problem requires an approximating codelength $L(x, P)$ that mirrors $K(x, P)$, commonly selected according to MDL or MML principles. Despite the performance achieved, the gap between the practical $L(x, P)$ and the theoretical $K(x, P)$ is still an open problem in MDL-based causal discovery methods [1, 3], which we have not been able to rectify at the moment. However, although our method begins from the Kolmogorov formulation, it is important to note that the identifiability of our models are studied from an orthogonal perspective of marginal likelihoods, which should not be theoretically impacted by the MDL approximation. **2. MDL of Bayesian Neural Networks** We would like to clarify that the prior over the parameters in our method are Gaussian distributions with zero means and learned variances (as described in App. B and C.3). We have also analyzed the normal-Jeffreys sparsity-inducing prior in Tab. 3, App. F.2, which indicated no significant improvements in performance. Additionally, Bayesian learning has already been shown to follow Occam's razor in model selection [4]. Hence, the sparsity-inducing prior is not necessary in our method. Regarding the number of layers, as our model with one hidden layer has already achieved adequate performance, we did not consider including additional layer(s). We have performed additional experiments with two hidden layers. However, there are no substantial differences in performance when the number of layer is increased. Moreover, with one more layer, the AUROC scores on some datasets such as SIM, SIM-c, and Tübingen are slightly decreased. Accuracy: | Hidden nodes | AN | AN-s | LS | LS-s | MN-U | SIM | SIM-c | SIM-G | SIM-ln | CE-Multi | CE-Net | CE-Cha | Tübingen | |:-|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:| | 20 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.90 | 0.93 | 0.99 | 1.00 | 0.89 | 0.97 | 0.90 | 0.87 | | 10, 5 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.87 | 0.92 | 0.99 | 1.00 | 0.91 | 0.96 | 0.91 | 0.88 | | 10, 10 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.89 | 0.93 | 0.98 | 1.00 | 0.90 | 0.96 | 0.91 | 0.87 | | 20, 5 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.88 | 0.92 | 0.99 | 1.00 | 0.90 | 0.97 | 0.89 | 0.87 | | 20, 10 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.87 | 0.91 | 0.99 | 1.00 | 0.91 | 0.97 | 0.90 | 0.89 | AUROC: | Hidden nodes | AN | AN-s | LS | LS-s | MN-U | SIM | SIM-c | SIM-G | SIM-ln | CE-Multi | CE-Net | CE-Cha | Tübingen | |:-|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:| | 20 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.97 | 0.99 | 1.00 | 1.00 | 0.98 | 1.00 | 0.97 | 0.97 | | 10, 5 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.96 | 0.99 | 1.00 | 1.00 | 0.98 | 1.00 | 0.97 | 0.96 | | 10, 10 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.97 | 0.99 | 1.00 | 1.00 | 0.98 | 1.00 | 0.98 | 0.96 | | 20, 5 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.96 | 0.98 | 1.00 | 1.00 | 0.98 | 1.00 | 0.97 | 0.96 | | 20, 10 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.96 | 0.98 | 1.00 | 1.00 | 0.98 | 1.00 | 0.97 | 0.96 | **3. References to be discussed** Thank you for your recommendations. We will include additional related works in multivariate causal discovery in our multivariate discussion in App. G. As the aim of our work is the evaluation the codelengths/complexity of neural networks rather than estimating Bayesian neural networks (BNNs), we did not include a literature review on the BNNs, which we will examine in our future work. However, we do discuss related choices for computing the complexity of neural networks in Sec. 4.1. [1] Kaltenpoth, D., & Vreeken, J. (2023). Causal discovery with hidden confounders using the algorithmic Markov condition. In UAI. [2] Marx, A., & Vreeken, J. (2022). Formally Justifying MDL-based Inference of Cause and Effect. In AAAI ITCI'22 Workshop. [3] Marx, A., & Vreeken, J. (2019). Telling cause from effect by local and global regression. Knowl. Inf. Syst., 60. [4] Rasmussen, C., & Ghahramani, Z. (2000). Occam's razor. In NIPS. --- Rebuttal Comment 1.1: Comment: Many thanks for your rebuttal, including the additional experimental results. Since Kolmogorov complexity is not computable, one must rely on an approximating proxy. While this makes the problem solvable in practice, it also means that the problem solved by approximation methods—referred to as modified Kolmogorov complexity—may differ from the one based on the exact (original) Kolmogorov complexity. Notably, the gap between the modified and original Kolmogorov complexity is non-trivial to quantify. This raises a critical question: how does this gap impact the identification of causal models? Specifically, are the causal models inferred using modified Kolmogorov complexity consistent with those derived from the original Kolmogorov complexity? This issue is key—if the two are not consistent, the identifiability results presented in this paper may be problematic. At the same time, I recognize that this concern is non-trivial. Consequently, I have rated the paper a 3, but I do expect the authors to provide further insights on this matter. --- Reply to Comment 1.1.1: Comment: We appreciate Reviewer 3KpH's acknowledgement of our rebuttal. In this response, we present some further insights into the gap between the Kolmogorov complexity and the approximated MDL codelengths. As mentioned in our previous response, the gap between Kolmogorov complexity and MDL is still an open problem and has not yet been rigorously studied in most current literature on complexity-based causal discovery. The only variant of MDL that can be guaranteed to compute the Kolmogorov complexity is the Ideal MDL [1-4]. In this variant, the class of models is chosen with respect to a Solomonoff prior [1-5], $-\log\pi(\cdot)\propto K(\cdot)$, which is a universal prior over all Turing machines/distributions that generate the string and halt. Because this prior itself is also defined based on the Kolmogorov complexity, it is also not computable. However, we can narrow this gap with a sufficiently broad class of models, while maintaining the independence between these models and their conditioning variable(s) by design, and a large enough number of samples [5, 6]. In this case, the approximated codelengths can be expected to converge to Kolmogorov complexity and the inequality to hold for the approximated MDL codelengths with constrained classes of models [5, 6]. Particularly, in this work, our Bayesian class of models has already been intentionally selected to be flexible and identifiable from the marginal likelihood perspective. As a result, we expect that the inequality will be consistent both for our approximated complexities at the statistical level and for the Kolmogorov complexities at a higher level of machine design. Hence, although this gap is non-trivial to study, we believe that it should not affect the identifiability results presented in our paper, which are evaluated independently of the Kolmogorov complexity-based postulates. Nevertheless, other models will still require careful analysis of this gap, and this problem will surely be one crucial perspective that we will consider in future work. We will incorporate these discussions on this gap between the Kolmogorov complexity and MDL in the next version of our paper to clarify the justifications behind our approximation method. [1] Grünwald, P. D. (2007). The minimum description length principle. MIT Press. [2] Li, M., & Vitányi, P. (2019). An Introduction to Kolmogorov Complexity and Its Applications (4th ed.). Springer. [3] Solomonoff, R. J. (1964). A formal theory of inductive inference. Part I. Information and Control, 7(1). [4] Solomonoff, R. J. (1964). A formal theory of inductive inference. Part II. Information and Control, 7(2). [5] Kaltenpoth, D. (2024). Don’t Confound Yourself: Causality from Biased Data [Doctoral dissertation, Saarland University]. [6] Marx, A., & Vreeken, J. (2022). Formally Justifying MDL-based Inference of Cause and Effect. In AAAI ITCI'22 Workshop.
Summary: The paper proposes a new method, called ‘COMIC’, for bivariate causal direction identification under the causal sufficiency assumption. It is based on the familiar ICM + MDL principles, utilising a variational Bayesian learning of the complexity of neural network approximations to the marginal/conditionals implied the two different decompositions X-> Y vs. Y-> X. It is intended to offer the same flexibility in model fitness obtained via GPs, but at a significantly lower computational cost / better scalability. The end result is evaluated on a range of benchmark data sets against a range of alternatives, and found to provide near universal improvement over existing methods on nearly all data sets. Claims And Evidence: Claims are properly supported by theoretical derivations and experimental results. Methods And Evaluation Criteria: Well-known, challenging problem. New approach that seems both efficient and effective, with lots of potential for further extensions. Main criticism is that the paper keeps following the pervasive simplifying assumption of ‘no confounding’. Yes, everyone does it, and it makes life so much easier for writing papers, but it is unrealistic and unhelpful in real-world contexts, making the resulting methods, however fast&fancy, often largely useless in practice … maybe something to tackle next? Theoretical Claims: Yes, at least to some degree (see Q at 12) Experimental Designs Or Analyses: Extensive evaluation (incl. Appendix F). Only thing surprisingly missing from the experimental results is an indication of performance / scalability of COMIC vs. e.g. GPLVM, as that aspect was one of the main motivations behind the current approach. Supplementary Material: Yes, parts in detail, parts skimmed. Relation To Broader Scientific Literature: Good: all relevant bases seem to be covered. Essential References Not Discussed: None that come to mind. Other Strengths And Weaknesses: Well written paper with interesting and meaningful contribution to a well-established problem. Fairly technical, but readable. Would have liked to get some more insight/intuition for the resulting computational complexity and stability of the result. But overall solid paper, so for now: clear accept. Other Comments Or Suggestions: No. Questions For Authors: Perhaps I am misunderstanding part of Prop.4.2, but it reads as a claim about your (neural network) causal model approximations, stating that they are not separable-compatible, and hence capable of distinguishing between X -> Y vs. Y -> X in the large sample limit, without reference to the actual underlying causal model? But if the underlying model responsible for generating the data is e.g. linear Gaussian (so separable-compatible), then this should be impossible, right? Or is this implicitly excluded in the assumption that the underlying model itself must be a neural network with the stated parameters? To be clear: I would understand the neural network model as a universal approximation to the real underlying model, and so a Prop4.2/Cor4.3 along the form ‘If the underlying model for X->Y is not distributionally equivalent to a model with Y->X, then our neural network approximation would yield different Bayesian codelengths for the two alternatives, i.e. they would not be separable-compatible.’ Could you clarify what exactly is meant here? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank Reviewer NJ5G for recognizing the positive aspects of our work. We expect the following rebuttal will address your concerns. **1. Underlying model assumptions** We summarize three key definitions related to the identifiability of our Bayesian causal models (BCMs), introduced in [1], as follows: 1. *Distribution-equivalence:* causal model selection via **maximum** likelihood will not be able to identify the causal direction; 2. *Bayesian distribution-equivalence:* Bayesian causal model selection via **marginal** likelihood will not be able to identify the causal direction, which is a stricter condition than distribution-equivalence; 3. *Separable-compatibility:* the necessary condition for two distribution-equivalent BCMs being Bayesian distribution-equivalent. If two BCMs are distribution-equivalent, as long as they are not separable-compatible, they will not be Bayesian distribution-equivalent [1]. Similar to previous methods based on functional causal models (FCMs), the BCMs as in GPLVM [1] and our work are assumed to be the *underlying generating processes* of the data. With non-separable-compatible BCMs, given enough data, we can estimate the underlying BCMs and identify the causal direction with the marginal likelihoods. The benefit of this approach is that BCMs can be designed to be flexible enough (while keeping non-separable-compatibility) to cover a broad spectrum of generating models, such as GPLVM [1] or Bayesian neural networks in our work. Therefore, your example of normalized linear Gaussian models do not lie into the scope of our BCMs which assume that the effect is generated via nonlinear Bayesian neural networks. **2. Performance over scalability results** Since we use the official implementations of the baselines, we do not believe a direct comparison of running time would be fair. Instead, we have opted to evaluate the computational complexity of our forward pass in App. B. However, for a rough comparison, on the AN dataset, our work only took around 3 minutes on a CPU configuration, whereas GPLVM required on average about 2 days 11 hours on 10 GPUs. **3. Causal sufficiency assumption** Thank you for your suggestion. Since our performance on the SIM-c (with confounders) is promising, relaxing the causal sufficiency assumption will be a potential direction of study for our future work. [1] Dhir, A., Power, S., & Van Der Wilk, M. (2024). Bivariate Causal Discovery using Bayesian Model Selection. In ICML.
null
null
null
null
null
null
In-Context Denoising with One-Layer Transformers: Connections between Attention and Associative Memory Retrieval
Accept (oral)
Summary: ## Update After Rebuttal ## I maintain my score. Please see below for my reasonings in my response. Overall, I think it is a great paper despite some of my comments. ----- This work introduces a concept, in-context denoising, which is a task that refines the connection between attention-based architectures and Dense Associative Memory networks (DAMs). Using a Bayesian framework, based on the minimization problem of $\mathbb{E}_{X, \tilde{X}} [X - f_\theta(\tilde{X})]$, where $\tilde{X}$ is the perturbed version of $X$, this paper illustrates that certain context-dependent denoising problems can be solved with a single-layer transformer model. Furthermore, it also demonstrates that a context-aware DAM one-step update yields better performance than a non-context-aware DAM update. Altogether, the work further solidifies the connection between attention and associative memory (AM), while illustrating the relevancy of AM for in-context learning (ICL). Claims And Evidence: ### Claim The paper has two fundamental claims: (1) They claim that there are certain denoising tasks that a single-layer transformer can effectively solve. (2) Once trained, the attention layer effectively performs an operation that is a single gradient descent step on a context-aware DAM's energy landscape. This update is better than an extract retrieval of a token or spurious sample. ### Evidence (1) To support the first claim, the presents theoretical results on three elementary denoising tasks where the data comes from (a) linear manifolds, (b) non-linear manifolds and (c) Gaussian mixtures or clusters, see Figure 1. Specifically, it presents the Bayesian optimal function $f_\text{opt}$ for each of these cases. For each case of data, the work makes an argument that the $f_\text{opt}$ of each case is equivalent to the attention mechanism of different activation (e.g., linear or softmax attention). To further support their statement on $f_\text{opt}$ of each case being equivalent to the attention mechanism. They perform experiments illustrated in Figure (3), which highlights the MSE loss of the estimated $\hat{x}$ and the expected loss obtained from $f_\text{opt}$. This overall demonstrates that the 1-layer transformer model will converge to the expected loss obtained from the Bayesian optimal predictor. Note, this particular experiment is pretty convincing. (2) The connection between DAM and attention is rather straightforward since [Ramsaeur et al. (2020)](https://arxiv.org/abs/2008.02217) has already established this initially. To further establish this connection, this work connects the DAM energy to ICL where the context sequence $X_{1:L}$ serves as the memory in which the noised prompt $\hat{x}$ uses for its alignment with the system. However, this connection is quite brief and obvious, since the formulation is already done [Ramsaeur et al. (2020)](https://arxiv.org/abs/2008.02217). To support their second claim, they perform a simple experiment detailed in Figure 5. In this experiment, using the DAM energy equation, which involves the term $\frac{1}{2 \alpha} \lVert s \rVert^2$, they contrast the update rule $s(t + 1) = (1 - \frac{\gamma}{\alpha}) s(t) + \gamma \, X_{1:L} \, \text{softmax}(\beta X_{1:L}^\top s(t))$. Specifically, they contrast when the update rule is equivalent to the attention mechanism, i.e., $\gamma = \alpha$, and when $\gamma \neq \alpha$. As already shown in [Ramsaeur et al. (2020)](https://arxiv.org/abs/2008.02217), when $\gamma = \alpha$, the retrieval is fast and accurate and shown here by this work, when $\gamma \neq \alpha$, the retrieval diverges away from the target point because the dynamics become query-independent. ## Strength (1) The theoretical results regarding $f_\text{opt}$ for the three data cases presented in this paper are concrete and easy to understand. The paper also presents the expected optimal weight for the single-layer transformer once it is trained. This weight is simply a scaled identity matrix. (2) Another interesting result presented in the paper is the effect of increasing context length on the convergence to the expected bound based on the Bayesian optimal predictor. With sufficient (or increasing) context length, it seems that the convergence rate is increased. (3) This paper provides an interesting in the dynamics of transformer regarding denoising problem and further connects such dynamics to DAM. ## Weaknesses (1) The analyses are done on a single-layer transformer system. It is unclear on how normal models, i.e., those with a large depth, behaves. Specifically, how does their weights vary and what effects do they play in performing multiple iterative steps? (2) The connection to DAM feels a bit minimal, which is somewhat understandable since much of the results are focused on the theoretical section involving the three general denoising cases that a simple transformer can solve. (3) Although the one-step update in DAM (i.e., when $\gamma = \alpha$) is faster to converge or accurate in terms for the retrieval setting, it's not exactly better depending on the task. For example, in terms of denoising to generate new data, the update with $\gamma \neq \alpha$ could be better, since you don't need exact retrieval in this case. Methods And Evaluation Criteria: (1) The methods or experimentation which evaluated claim (1) make a lot of sense. The authors trained a single-layer transformer and analyze whether its convergence is bounded by their theoretical value. I don't see faults on their experimentation. (2) The experimentation for claim (2) is straightforward and quite simple. Once again, there are no faults in their approach here. Theoretical Claims: I (the reviewer) primarily paid much of my attention in sections (2 and 3) since they contained the theoretical proof and results. I don't see any problems with the proofs nor the claims. My general issues are laid out in the weaknesses above. Experimental Designs Or Analyses: As mentioned prior, I see no flaws in the experimental designs and their analyses. My general issues are laid out in the weaknesses above. Supplementary Material: I paid much attention to sections (A to C) to help understand sections (2 and 3) of the main text. I also paid attention to section G which demonstrates that Linear Attention is simply the Spherical Hopfield model. Relation To Broader Scientific Literature: In general, we now know that the attention mechanism from [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762) is connected to the Modern Hopfield Model, developed by [Krotov and Hopfield (2016)](https://arxiv.org/abs/1606.01164) and [Demircigil et al. (2017)](https://arxiv.org/abs/1702.01929), shown by [Ramsaeur et al. (2020)](https://arxiv.org/abs/2008.02217). Moreover, we know how powerful the applications developed with the Transformer architecture can be, e.g., ViTs [Dosovitskiy et al.](https://arxiv.org/abs/2010.11929). ***But we still don't know why the transformer block works so well and what exactly does the mechanism of the block means***. With the perspective of DAM, we know the attention mechanism is simply a gradient step on the DAM's energy landscape as shown by [Ramsaeur et al. (2020)](https://arxiv.org/abs/2008.02217) and further reinforced and explained in this work. With simple data cases, this work demonstrates that a trained single-layer attention model will converge to become a gradient step on a Hopfield model's energy landscape. This is quite significant. Essential References Not Discussed: A nice reference that could be mentioned is the Energy Transformer (or [ET](https://arxiv.org/abs/2302.07253)). It is an extension of [Krotov and Hopfield (2021)](https://arxiv.org/abs/2008.06996) and [Ramsaeur et al. (2020)](https://arxiv.org/abs/2008.02217) which converts the non-dynamical Transformer block into a dynamical (Energy) Transformer block. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: (1) I think there are a lot of results presented in the paper, which is a strength. However, these results can be quite complex for some readers. I think leading the presentation of the paper with the narrative of DAM can help keep the interests of the readers. (2) In section (C.2) of Appendix C, there is an incomplete sentence at the end of the sub-section: "The clustering case" Questions For Authors: See my weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment and the high score (4/5) provided. We concur with their observation that, collectively, we still don't know why the transformer block works so well. This shared curiosity drives our work to build theoretical foundations for understanding the mechanisms underlying the success and flexibility of attention mechanisms. We are thus deeply grateful for their positive feedback in this direction. **Re: Claims and Evidence** We respectfully disagree with point (2) in the evidence section. While our paper builds on the connection established by Ramsauer et al., our contribution goes significantly beyond this foundation. Reading Ramsauer et al., one gets the impression that the question is memory retrieval. In our formulation, the task could be quite distinct from memory retrieval. We set up an in-context learning task, whose one-layer transformer-based solution is not guaranteed to look like a one-step lowering of some simple energy function. Quite remarkably, it does so in the special cases we study. **Re: Essential References** We thank the reviewer for highlighting the Energy Transformer (ET) paper and regret missing the citation. It is distinct but complementary effort that will be discussed in our revised manuscript to better situate our contribution within the broader literature. **Re: Strengths and Weaknesses** Regarding the weaknesses discussed: - (1) We agree that analyzing deeper transformer systems is an important direction. Our focus on single-layer transformers was deliberate to establish clear connections in this foundational case. The question of how weights in deeper networks might implement multiple iterative steps on energy landscapes represents an exciting avenue for future research. - (2) This paper was written as an invitation to those working in mainstream transformer theory towards the very interesting activity in the DAM community. The in-context denoising task was meant to be a bridge between these two worlds. We will include further discussion on connections to DAM in the final version, if accepted, such as a detailed comparison with the Energy Transformer and a discussion of how our approach relates to tasks of interest in the DAM community, which we initially omitted due to space limitations. - (3) Indeed, in specific retrieval tasks, one might very well benefit from multiple steps or varied step sizes. In our specialized denoising cases, the 'step size' gets fixed during training, where the transformer is trying to approximate the Bayes optimal answer. This highlights an interesting distinction between retrieval tasks and denoising tasks that could inform broader transformer design. **Re: Comments and Suggestions** - (1) Thanks for the kind words! We are certainly open to refocusing the introduction to keep the interest of readers, but would prefer to lead the results section with in-context learning and the Bayesian setting + derivations, as we feel it sets the stage well for the emergence of the associative memory behavior in the optimal/trained networks. - (2) Thanks much for catching the incomplete sentence at the end of subsection C.2 in the Appendix. It was meant to be part of the header for Section C.3. We will correct this in the revised version. -- We thank the reviewer again for their positive feedback and valuable suggestions. We believe that addressing these points will further strengthen the paper while maintaining its core contributions to solidifying the connection between attention mechanisms and associative memory in a minimal in-context learning setting. --- Rebuttal Comment 1.1: Comment: To the authors, Thank you for your response. Overall, I enjoy reading your paper. Although, my opinion could be wrong, I still feel the connection to DAM is brief. However, I believe the results presented in the paper are adequate and tell a nice story about ICL and DAM. Another thing that could be mentioned regarding Eqs. (15 and 16) is that --- if the update rule only involves the softmax term (i.e., $\gamma = \alpha$), we get a very nice update rule for memory retrieval but when $\gamma \neq \alpha$, this could actually be perfect for generation or synthesizing new minima from the stored memories (if that makes sense). A few more suggestions from me are: (1) fix line (642-643) (2) fix line (674-675) (3) perhaps move Fig. (6) to the main text given the additional page for camera-version (4) increase the size of Fig. 5 --- the font size of the legend/labels seems too small to me (5) Fig. 4 could be stretch out a bit more, I believe if you crop out the white space of that figure, Latex should be able to make it larger. Overall, I would like to maintain my score. Best of luck to the authors. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their positive feedback and helpful suggestions towards improving the work. We appreciate the insightful idea about the update rule when $\gamma \neq \alpha$ potentially being useful for generation, as well as the formatting recommendations. We will incorporate these points in our camera-ready version if accepted.
Summary: This paper studies the in-context unsupervised denoising of data points in transformers. They show that single layer transformers with a single attention head are sufficient to learn this task and that standard training procedures from random initialization can converge to Bayes optimal solutions. Lastly, they provide connections between the learned transformer models on this task and Dense Associative Memory models, a modern variant of Hopfield networks which have high storage capacity. Claims And Evidence: The authors provide ample theoretical and empirical support of their claims. They provide derivations of all of the Bayes optimal predictions $\mathbb{E}[X|\tilde{X}]$ for the various settings they consider, they provide constructions which show that transformers can implement these solutions, and they also provide many experiments where they train transformers on these tasks and show the ability of the model to achieve Bayes optimal error. Methods And Evaluation Criteria: Yes, the paper primarily focuses on synthetic data but this is the setting of interest for their theory. Theoretical Claims: Yes, I checked the theoretical claims, most of which are easy to verify by following the derivations in the Appendix. Experimental Designs Or Analyses: I did not check the experimental design in great detail, but I trust the simulations. Supplementary Material: Yes, I reviewed most of the supplementary material. Relation To Broader Scientific Literature: This paper studies an important problem of in-context capabilities of transformers, specifically in the context of denoising. They show that transformers can theoretically and empirically learn to denoise several distributions. In addition, they also provide interesting connections to the associative memory literature. Essential References Not Discussed: Possibly of interest is this work https://arxiv.org/abs/2310.09753 which finds that adding additional identity matrix to the product $W_k^\top W_Q$ can lead to improved performance on template "reasoning" tasks. This reminds me somewhat of the fact that you argue that $W_k^\top W_Q$ should be close to a scaled identity in this work. Other Strengths And Weaknesses: This paper studies an interesting problem, gives precise and testable predictions and verifies them. One weakness at the present moment is the limited scope of data distributions that are considered, but I think that the present contribution is already good. Other Comments Or Suggestions: N/A Questions For Authors: 1. The authors provide mainly asymptotic results as context length diverges $L \to \infty$. Do they have any sense of the convergence rate in $L$ as a function of the dimension of the space? Specifically how large does $L$ need to be for the finite sample sums in equation 11 or 13 need to be to converge to the correct estimate? 2. The construction concluded that $W_k^\top W_q$ should be a scaled version of the identity, but in experiments it doesn't have to be exactly this. The authors then provide a short discussion on why this could be, specifically that there are different values of $W_k^\top W_q$ and $W_v$ that implement equivalent attention operations. Have the authors checked if in fact the learned implementation matches their theoretical expectations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful review and their high evaluation of our paper (4/5). We agree that this is an important problem and are thus grateful for their positive assessment of our theoretical findings and empirical validations. **Re: Weaknesses** The reviewer notes that our paper primarily focuses on synthetic data. This was a deliberate design choice to establish clear theoretical connections between transformer attention and Dense Associative Memory (DAM) networks. By working with well-defined synthetic distributions, we could derive Bayes-optimal predictors as theoretical baselines against which to evaluate our transformer models. We agree that extending these insights to more complex, real-world data distributions represents an important direction for future work. The current paper focuses on establishing foundational theoretical results and validating them empirically in controlled settings where optimality can be rigorously established. This approach allows us to identify the key mechanisms by which transformer attention implements optimal denoising, providing insights that can inform future work on more complex data distributions. **Re: References** We thank the reviewer for pointing us to the work of Boix-Adsera et al. This paper's finding that adding an identity matrix to the product $W_K^T W_Q$ can improve performance on template reasoning tasks (Observation 1.2) resonates with our work. We will include this reference in our revision (perhaps alongside Trockman & Kolter (2023) in the discussion) to highlight this potential connection and suggest avenues for future investigation. **Re: Specific Questions** - **(Q1)** Regarding the convergence rate in $L$ as a function of dimension: we appreciate this question and will consider including a more detailed analysis of finite-sample effects in the Appendix. Briefly, our analysis primarily focused on the asymptotic behavior as $L \rightarrow \infty$ (using the strong law of large numbers, which just requires the mean to exist). However, in the linear example, our tokens are Gaussian and in the two non-linear cases they are bounded. Intuitively, we expect error $O(\sqrt\frac{1}{L})$. In fact, we can give precise probabilistic bounds of the form that the difference between the empirical sum for the ideal weights depart from the expectation by less than $C(\tilde x)\sqrt{\frac{f(d,\ln\frac{1}{\delta})}{L}}$ is greater than $1-\delta$. The function $C$ of the query vector and the function $f$ depends on the problem. Interestingly, this indicates $d$ depends on the dimension spanned by the tokens (not the ambient dimension $n$). Figure 4(a) provides some empirical evidence for this relationship, showing how performance improves with increasing context length. - **(Q2)** Regarding the learned implementation matching theoretical expectations: Yes, we have verified this carefully. Figure 3(b) shows the final learned attention weights, which closely resemble scaled identity matrices as predicted by our theory. We also analyzed the loss landscape with respect to scaling factors $\alpha$ and $\beta$, confirming that the learned weights indeed lie in the predicted valley of the loss landscape. The minor deviations from the exact theoretical values can be attributed to finite context effects and the stochastic nature of the training process, but do not contradict our theoretical findings. -- We thank the reviewer again for their thoughtful review and insightful questions. Their suggestions have helped us identify important points to clarify and highlight in the final version of our paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed comments and maintain my positive evaluation. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for their positive feedback. We will try our best to incorporate their suggestions in the camera-ready version if accepted.
Summary: The paper explores a link between one-layer transformer attention and associative memory retrieval. It frames an in-context denoising setting with three synthetic data scenarios: linear subspaces, points on a sphere, and Gaussian mixture clusters. It derives Bayes-optimal predictors in each setting and shows that a single attention update approximates these optimal solutions under small-scale assumptions. It interprets the one-layer transformer as one-step gradient descent on a Hopfield-like energy. Iterative updates converge to stored attractors and degrade denoising performance. The paper supports these claims with synthetic experiments, where trained transformers match the predicted optimal performance. It concludes that a one-step attention update implements Bayes-optimal denoising in specific synthetic contexts, connecting transformer attention with associative memory beyond exact pattern retrieval. ## update after rebuttal Claims And Evidence: * **Claim: Single-step attention approximates Bayes-optimal denoisers in specific synthetic settings** * Evidence: The authors derive exact optimal predictors under Gaussian mixtures, linear subspaces, and spherical assumptions. They show near-optimal mean squared error in experiments. * **Claim: One-layer transformers match a one-step gradient update on a Hopfield-like energy** * Evidence: The paper presents an energy formulation and demonstrates that learned attention weights align with the gradient of that energy. Experimental training runs confirm that a single attention update outperforms multiple updates in denoising. Methods And Evaluation Criteria: - Theoretical Claims: All key claims are backed by formal derivations (with detailed proofs in the appendices). The math seems sound. For example, Proposition 2–4 give closed-form optimal denoisers for each case, and Proposition 3.2 analyzes the softmax attention in a small-argument regime. These ensure the theoretical claims hold under certain limits. Throughout, assumptions (e.g. isotropic distributions, large ambient dimension, small noise limits) are clearly stated, and the derivations use standard Bayesian inference and asymptotic analysis, which appear logically valid. Experimental Designs Or Analyses: **Conclusion:** The experimental design is well aligned with the theory. The analyses are correct for the stated goals. The limited scope leaves open questions about broader generalization. --- **Synthetic Task Design:** The authors focus on three artificial data setups: linear subspaces, points on a sphere, and Gaussian mixture clusters. They construct these tasks to mirror the paper’s theoretical assumptions (e.g., isotropy, symmetry). This design is consistent with the paper’s goal of checking Bayes-optimality in simplified contexts. **Model Configuration:** They use one-layer transformers with either linear or softmax attention. They match each architecture to a theoretical predictor. The design supports the core claim that a single-step attention update aligns with the derived solutions. **Training Procedure:** They train these models on the synthetic tasks and measure mean squared error, checking how close each model’s performance is to the analytical Bayes-optimal denoiser. This procedure is valid. They compare different initializations and sometimes vary noise scales to confirm robustness. **Comparisons:** They contrast linear attention versus softmax attention. They show that both approaches achieve near-Bayes-optimal performance. They analyze how the learned attention weights resemble the predicted identity-like matrix in each scenario. **Limitations:** They do not test real-world data or more complex tasks. The synthetic setups are narrow in scope. I guess this is fine for a theoretical paper. Supplementary Material: yes. i skimmed through the appendix (mainly proofs) and code. The proofs look formal but I didn't check them line-by-line. The code seems ok, but I also did not run it by myself. I decide to trust the authors here :) Relation To Broader Scientific Literature: ### **Connection to Modern Hopfield Networks:** The paper builds on the established link between transformer attention and modern Hopfield retrieval (Ramsauer et al., 2021). It interprets a single-layer transformer as a one-step gradient update on a Hopfield-like energy. Similar efforts in recent literature but not mentioned in this paper include - [Santos et al. ICML 2024] Sparse and Structured Hopfield Networks https://arxiv.org/abs/2402.13725 - [Wu et al., ICML 2024] Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models https://arxiv.org/abs/2404.03827 - [Wu et al., ICLR 2024] STanHop: Sparse Tandem Hopfield Model for Memory-Enhanced Time Series Prediction https://arxiv.org/abs/2312.17346 - [Hu et al., NeurIPS 2023] On Sparse Modern Hopfield Model https://arxiv.org/abs/2309.12673 The paper’s one-step approach complements these ideas but does not discuss them. --- ### **Extension from Exact Retrieval to Denoising:** Prior Hopfield models often retrieve exact stored patterns, e.g., [Santos et al. ICML 2024]. The paper shifts toward Bayes-optimal denoising, which involves partial or mixed retrieval rather than convergence to a single pattern. --- ### **Broader Transformers and In-Context Learning:** The paper adds to literature suggesting transformers perform implicit gradient updates in-context (for examples, see below). Prior works demonstrate in-context meta-learning [Dai et al., ACL 2023] or approximate Bayesian inference [Xie et al., ICLR 2022] , and this paper contributes a denoising viewpoint. - [Oswald et al., ICML 2023] Transformers learn in-context by gradient descent https://arxiv.org/abs/2212.07677 - [Dai et al., ACL 2023] Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers https://arxiv.org/abs/2212.10559 - [Bai et al., NeurIPS 2023] Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection https://arxiv.org/abs/2306.04637 - [Ahn et al., NeurIPS 2023] Transformers learn to implement preconditioned gradient descent for in-context learning https://arxiv.org/abs/2306.00297 - [Xie et al., ICLR 2022] An Explanation of In-context Learning as Implicit Bayesian Inference https://arxiv.org/abs/2111.02080 Essential References Not Discussed: - Other Strengths And Weaknesses: ## Strengths - **Originality:** Introduces in-context denoising as a new setting, explicitly linking transformer attention to Hopfield memory. Extends previous views that only consider retrieval tasks. - **Significance:** Clearly demonstrates the significance of transformers performing optimal denoising updates beyond single-pattern retrieval, providing a formal connection between in-context attention and optimal memory retrieval. - **Clarity:** Clearly explains the proposed approach and its advantages, with logical connections between theoretical analysis and experiments. ## Weaknesses - **Positioning and Contribution:** The main concern is the unclear positioning of the paper’s contribution. Transformers implicitly simulate gradient-based updates in-context, as established by existing literature. Prior studies already demonstrate this perspective, though typically limited to single-token retrieval. After some reflections, I suggest positioning this paper as extending the known "transformers-as-implicit-gradients" concept. Through 3 very specialized examples, this paper analyzes how attention interpolates among multiple context tokens to achieve a Bayes-optimal denoiser, rather than merely retrieving or copying one pattern. Specifically, this paper goes beyond prior studies by: - Performs optimization blending multiple context tokens, achieving performance superior to single-token retrieval. - Demonstrates Bayes-optimality explicitly in simplified synthetic tasks. - Clarifies how mixed retrieval outperforms exact retrieval, providing rigor to the implicit-gradient viewpoint in multi-token scenarios. This adds a conceptual and theoretical layer to the implicit GD story: it clarifies why partial or “mixed” retrieval can be optimal, whereas previous examples focused on simpler single-token or exact retrieval scenarios. - **Literature Completeness:** Several relevant references about the association between attention mechanisms and one-step associative memory retrieval are missing. Including these references improves the contextual completeness of the paper. - **Equation referencing:** Some equations have numbers without explicit references in the main text. Equations that are not explicitly cited should not have numbers. ## Why not higher score? - The paper primarily refines known connections between transformers and Hopfield networks. The core novelty is modest, building on established frameworks rather than providing fundamental insights. - The theoretical analysis and experiments focus exclusively on simplified synthetic scenarios, limiting generalization to more realistic or practical applications. - The limited experimental and theoretical scope reduces the broader impact. Expanding experiments or theoretical analysis beyond synthetic settings significantly strengthens this work. Other Comments Or Suggestions: see above Questions For Authors: 1. whether the one-step Bayes-optimal property holds when data are more complex, structured, or high-dimensional (e.g., text or images)? for example, how read the corresponding attention be like? is it possible to be sth like really attention heads in practice? 2. What happens if we are allowed to perform multiple iterative updates? Do additional steps help or hinder performance? 3. Does the Bayes-optimality of one-step updates hold in 1-layer multi-head attention? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful engagement with our work, particularly their recognition of our work's **originality** in studying in-context denoising, its **significance** in demonstrating transformers performing optimal denoising updates beyond exact pattern retrieval, and its **clarity** in explaining the proposed approach. We believe these findings establish fundamental connections between attention architectures and Dense Associative Memory networks through Bayes-optimal denoising -- providing insights that extend beyond incremental advances while also establishing a bridge to the study of in-context learning. **Re: Positioning and Contribution** Our paper makes a specific contribution beyond prior work connecting transformers and Hopfield networks: establishing that for certain denoising tasks, a single attention step can be optimal from a Bayesian perspective. This provides theoretical justification for why the "one-step correspondence" noted by Ramsauer et al. (2020) works effectively. Our key insight is that in-context denoising connects associative memory retrieval, attention, and ICL in a common Bayesian framework. We demonstrate that: - For certain denoising tasks, one-layer transformers can represent Bayes-optimal predictors - Trained attention layers correspond to single gradient descent steps on a context-dependent energy landscape - This single step can outperform multiple iterations, challenging conventional associative memory paradigms. This extends beyond the common view that associative memories must converge (often through multiple iterations) to be effective. **Re: Relation To Broader Scientific Literature** We thank the reviewer for highlighting important related works. We will incorporate these references, particularly Santos et al. (ICML 2024), Wu et al. (ICML 2024), Wu et al. (ICLR 2024), and Hu et al. (NeurIPS 2023) on sparse modern Hopfield networks. We'll also discuss connections to implicit gradient descent literature mentioned by the reviewer. These works complement our findings while our contribution on optimal one-step denoising adds a new perspective. **Re: Equation referencing** We are open to doing so (especially if it is an ICML guideline), but regard it as a style choice, as readers may mention specific equations even if they aren't referenced within the text. **Re: Strengths and Weaknesses** Regarding our energy function differing from standard DAM: This modification was deliberate and essential for our denoising tasks. The Lagrange multiplier term handles continuous state spaces more naturally while maintaining core associative memory dynamics, similar to regularization approaches in other energy-based models. As for additional iterations degrading performance: This is a key finding, not a limitation. Traditional DAMs aim to retrieve patterns exactly, but in-context denoising requires blending information from context tokens with the corrupted query. Figure 5 demonstrates why a single step is optimal for certain denoising tasks, providing theoretical insight into when one-step updates outperform iterative approaches. **Re: Why not higher score?** We respectfully suggest reconsidering our contribution: - Our paper provides fundamental insights into why single-step attention can be optimal for denoising tasks, moving beyond merely refining known connections and establishing the ICL connection. - Our focus on simplified synthetic scenarios establishes rigorous theoretical foundations, a common approach in theoretical ML papers. - The impact extends beyond specific scenarios studied. By establishing when single-step updates can be optimal, we provide insights informing design and analysis of more complex transformer architectures. **Re: Questions for Authors** - **(1) & (3):** We've shown that for elementary denoising tasks, the optimal solution is a one-layer, single-head attention architecture viewable through the DAM lens. For more complex tasks, multi-layer/multi-head architectures likely become necessary for Bayes-optimality -- an exciting direction for future research. - **(2):** This important question motivated Section 4. Traditional views suggest convergence via iterative attention updates are beneficial. Fig. 5 shows how one-step outperforms multiple iterations in our denoising task. Multiple steps push the query toward fixed points that depend on random sampling, causing the system to 'forget' query information - sub-optimal for denoising. This insight reinforces the utility of the single attention step: it strikes an optimal balance between query and context information. We will discuss this in the revised text. -- The reviewer's thoughtful feedback and insightful questions are much appreciated. We believe that addressing these points will strengthen the paper while maintaining its core contribution: solidifying the connection between attention mechanisms and associative memory through a novel in-context denoising setting.
Summary: The paper considers in-context denoising as a fundamental task of attention (when applied in a prompt-conditioned, auto-regressive manner). When interpreted this way, there are clear connections to Dense Associative Memories (DAMs), and a model trained on a final-token denoising task can approach the optimal Bayesian prediction when knowledge of the true manifold is known in advance. Claims And Evidence: Claims are supported by simple experiments and theoretical analyses. Methods And Evaluation Criteria: The methods and evaluations are sound. It would be nice to see experiments on real data, not only toy data for which we know the underlying distribution $p_\mathcal{X}$, see weaknesses. Theoretical Claims: I did not review the proofs in the appendix for correctness, but I saw no glaring issues in the propositions of the the main paper. Experimental Designs Or Analyses: The experimental designs are sufficient to test the claims in this work. Supplementary Material: I reviewed Appendix D which describes their minimal transformer architectures they study in the main paper. I also reviewed Appendix G on the connections between transformers and AMs. I skimmed many of the proofs only to help me understand the claims in the paper, but I did not evaluate them for correctness. Relation To Broader Scientific Literature: This paper contributes to the growing body of evidence that treats Transformers (specifically, the attention operation of Transformers) as a form of Associative Memory. It also builds on a growing interest in formalizing ICL as a form of memory lookup given the previous tokens in the sequence. Essential References Not Discussed: A very related but undiscussed work is the [Energy Transformer](https://arxiv.org/abs/2302.07253), which formalizes the entire Transformer block as a kind of DAM. Specifically, the paper contains several statements that are explicitly addressed by the Energy Transformer: > [L068-070] why does the Ramsauer et al. (2021) correspondence involve only one iteration of Hopfield energy minimization and not many? > [L397-399] We have also noted preliminary connections between our work and other architectural features of modern transformers, namely layer normalization and residual streams, which warrant further study. > [L949-952] taking K recurrent updates could be viewed as the depthwise propagation of query updates through a K-layer architecture if one were to use tied weights. Analogous residual streams are commonly utilized in more elaborate transformer architectures to help propagate information to downstream attention heads. Additionally, the work of [Ambrogioni](https://arxiv.org/abs/2309.17290) should be mentioned alongside [Hoover et al.](https://arxiv.org/abs/2309.16750) on [L412] Other Strengths And Weaknesses: ### Strengths - **Well written**. Clear exposition and motivations makes this paper a pleasure to read. The figures are generally complete and self-descriptive (though a few details can be clarified, see questions). - **Strongly defended connection of ICL to Dense AM**. The idea that ICL (and indeed, much of the operation of transformers) can be studied through the perspective of associative memory and gradient descent on the energy landscape has been growing lately, and this paper continues that line of works by thoroughly describing the "next token denoising" task as an energy minimization problem that approaches a Bayes optimal answer. ### Weaknesses - **Limited scalability and connection to real transformers**. The ideas in the paper are good, but the sandbox in which they test the ideas is small. The experiments are on toy data where the underlying true distribution is known (*how can this work be extended when the true underlying distribution is not known?*), and they only use a single attention head for a single transformer layer. They also do not consider the case where "interesting parameterizations" of $W_{KQ}$ and $W_{PV}$ may be necessary to solve the task at hand (in their experiments, the training tasks cause these parameters to converge almost to identity). Additionally, the update rule they describe discusses only the denoising of the $L+1$ token, when the power of transformers today comes from their parallelizability, allowing them to evolve all tokens in the input simultaneously. Also, as far as I can conclude, all experiments and propositions treat the input tokens as a set of tokens instead of a sequence of tokens with positional information, as is needed for real tasks. ### Summary I like the paper and I think it is well written. The setting in which they choose to study the model, though quite limited, is appropriate for evaluating the claims in the paper. It would be nice for the authors to include some real-data experiments to make it easier for others to build off the work, but I believe the content in this paper is of sufficient completeness, correctness, novelty, and quality to be accepted to ICML. As always, I am happy to increase my score during the review process if the authors can clarify my questions and concerns Other Comments Or Suggestions: **Typos** - Is there an error in Appendix E's taylor expansion of the softmax [L814] and [L822]? It seems like a math symbol did not render correctly. Questions For Authors: Q1: Fig 1 is very helpful, thank you. But it is still unclear what is included in the prompt? Are all blue dots included in the prompt? It seems that the goal of ICL is to project the noisy "test token" ($\tilde{x}$}) back onto the manifold encoded by the context prompt $E$, but I am still not clear if there is any "sequential information" (e.g., token positions $[1,\ldots,N]$) in the prompt? Q2: Fig3a Case 2 -- why is the bayes optimal predictor have higher loss than the softmax train? Is this "overfitting" to the training data? Also, is there a reason you only test linear attention on the linear manifold case instead of including it in the non-linear and GMM results? Q3: I find it very interesting that the optimal weights for a single transformer layer/head can be expressed as scaled identity matrices. Could one interpretation of this idea be that the optimal predictor for this task is trying to "attract" the noisy token to the prompt tokens themselves (and not some transformed version of the prompt tokens)? In which case, it is of no surprise that the optimal parameterization leads to identity matrices. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thoughtful review describing our paper as 'a pleasure to read' with 'clear exposition and motivations.' We're encouraged by the recognition of our 'strongly defended connection of ICL to Dense AM' and we are particularly thankful that the reviewer found the paper to be of 'sufficient completeness, correctness, novelty, and quality to be accepted to ICML.' **Re: Strengths and Weaknesses** We thank the reviewer for their thoughtful feedback on the paper's limitations. While we acknowledge the controlled nature of our experimental setting, this design was deliberate to establish clear theoretical connections and optimal bounds. - **Synthetic data:** This choice was necessary to derive Bayes-optimal predictors as theoretical baselines. Our findings suggest that with sufficient context, transformers can learn to denoise even when the underlying distribution is not fully known a priori (Fig. 4). - **Single-layer architecture:** We focused on minimal architectures to isolate fundamental connections between attention and DAM in a novel ICL setting. Our results suggest deeper models might implement multiple gradient steps on a complex energy landscape, and it will be very interesting to explore how this interacts with related but fundamentally distinct efforts including the Energy Transformer (mentioned below by the reviewer.) - **Identity weights:** While our tasks led to scaled identity weights, we've begun exploring settings with non-isotropic covariance structures that require more complex parameterizations for $W_{KQ}$. - **Single token denoising:** We deliberately focused on denoising a single token to establish the clearest connection to DAM. Extending to multiple tokens and incorporating sequential information represents important future work. We view these limitations not as weaknesses of our approach but as exciting avenues for future research that can build upon the theoretical framework we've established. **Re: References** We regret missing the Energy Transformer (ET) citation and are grateful to the reviewer for highlighting it. We will certainly discuss it. Briefly, while both works explore connections between transformers and DAM networks, their approaches differ fundamentally. Our work examines in-context denoising and demonstrates why a single-step update can be optimal from a Bayesian perspective. In contrast, the ET tackles dataset-specific tasks via multiple iterations using a specialized but elegant design based on Hopfield models. The approaches are complementary but reversed: ET begins with an energy function to construct its architecture, whereas our work shows that standard attention mechanisms naturally learn to perform a gradient step on a context-aware DAM energy landscape. By starting from a vanilla attention layer in a minimal setting -- where true distributions are known and theoretical insights can be gleaned -- our contribution concretely explains why the Ramsauer correspondence involves only one iteration of energy minimization rather than many. We likewise will cite Ambrogioni alongside Hoover et al. in the revision (thank you!). **Re: Typo in Appendix E** We reviewed and did not find a typo there, but we will change $\mathbb{1}$ to $\mathbb{1}_L$ for notational consistency. **Re: Questions for Authors** - **(Q1)** The blue tokens in Fig. 1 represent the $L$ "pure" tokens sampled from $p_{X}(x)$. These form the context portion of the prompt, with the query being the corrupted $(L+1)^\textrm{th}$ token. Our formulation deliberately focuses on tasks where token order isn't relevant to maximize clarity in the transformer-DAM connection without positional embedding complexities. Extending this to sequence-dependent tasks (e.g. trajectory inference in dynamical systems) represents an exciting direction for future work. - **(Q2a)** The apparent outperformance in Fig. 3a (Case 2) represents mild training set overfitting. Test set performance remains bounded by the theoretical optimum. - **(Q2b)** While theory indicated softmax attention was appropriate for nonlinear manifold and mixture cases, linear attention also performs well. We can add a supplementary figure showing these additional results. - **(Q3)** This interpretation is insightful. The scaled identity weights suggest optimal operation attracts the noisy token toward context tokens themselves rather than transformed versions. This aligns with our finding that the network performs gradient descent on a DAM energy landscape where context tokens serve as stored patterns. We've also explored minimal generalizations where $W_{KQ}$ that are not scaled identity arise from more elaborate covariance structures. -- We thank the reviewer again for their valuable insights and constructive feedback towards improving the work. We hope our responses have addressed their questions and concerns while further clarifying the contributions of our work. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I believe this is a novel work of high quality that is worthy of acceptance to ICML. I am increasing my score accordingly. Best of luck --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for their positive feedback. We will try our best to incorporate their suggestions in the camera-ready version if accepted.
null
null
null
null
null
null
Demeaned Sparse: Efficient Anomaly Detection by Residual Estimate
Accept (poster)
Summary: This paper proposes a novel test for detecting anomalies in structural images using a discrete Fourier transform (DFT) under a factor model framework, which enables interpretable and effective reconstruction-based anomaly detection. Claims And Evidence: To my understanding, the claims are articulated clearly enough to be comprehensible. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the anomaly detection problem at hand. Theoretical Claims: Yes. Most of the theoretical proofs are logically sound. However, some aspects are controversial. On page 3, line 154, the question arises as to why the first statistic is weakly convergent and, after integration, becomes convergent in distribution. Experimental Designs Or Analyses: Yes. Most of the experimental design and analysis are reasonable. The regularization coefficient alpha has not been discussed. Therefore, the results of incorporating additional values for alpha should be considered. Supplementary Material: No supplementary materials have been submitted. Relation To Broader Scientific Literature: This paper adopts a factor model commonly used in time series analysis in economics. Building upon existing methods for detecting structural changes in time series data, this paper proposed a test which incorporates a multi-DFT dual cross-sectional dimension of positional information for image anomaly detection which also provides the relevant asymptotic theory. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: - This paper provides a relatively complete theoretical framework and derivations, particularly matching the properties of the asymptotic theory with the practical issues. The proofs are based on a well-established factor model framework, and the related theory is fairly comprehensive. - The module proposed based on the test is simple and quick to implement, with validation provided regarding its resource consumption. - The experimental results validate the effectiveness and consistency of the theory. Weakness: - As I mentioned above, the experimental design could be further improved. - This paper requires more detailed explanations. For example, on page 3, in the section on constructing the complex-valued empirical process, it is not explained why the common factors and residuals are combined together. However, this is actually used in the later asymptotic theory section. Another issue is that the title of the paper mentions residual estimation, yet the theoretical section also incorporates common factors. The authors have many ideas to convey, but the details are not sufficiently thorough. - The sparsification method proposed in the manuscript appears to offer several advantages; however, these benefits need to be supported by more detailed experimental analyses. Currently, the manuscript does not include experiments examining the effects of different sparsity levels on model performance. Adding such experiments would provide valuable insights into how varying degrees of sparsity influence performance outcomes. This additional analysis would significantly strengthen the manuscript and validate the proposed method's effectiveness. - The description of the method in the paper is not very intuitive. It would be helpful to summarize the method into a structured algorithm, presented in pseudocode format. Other Comments Or Suggestions: Some minor problems: - The notation in Equation 9 seems unusual since both loss terms are labeled as L_reg. - There are similar typos in the paper. Please review and correct them accordingly. - Whether the module constructed by this method potentially be applicable to a broader range of anomaly detection problems, including non-reconstructive anomaly detection methods? Questions For Authors: See comments above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 2ntV: Firstly, we express our gratitude for your thorough review and insightful comments on our paper. Your recognition of the novelty and contribution of our work is greatly appreciated. We also greatly appreciate your rigorous and professional feedback. Your suggestions are highly meaningful, especially for a theory-driven paper like ours. We have taken your feedback as an opportunity to further refine our manuscript. We address your concerns below: >Q1:” On page 3, line 154, the question arises as to why the first statistic is weakly convergent and, after integration, becomes convergent in distribution” Thank you for raising this question. Due to space limitations, we did not provide a detailed explanation. In our initial assumption, the Fourier transform is not performed over the $ \mathbb{R} $, but rather over a finite interval. However, our integration is carried out over the entire real domain, which corresponds to the weak and strong convergence relationships in functional analysis. This setting is also valid for the weight function $W$. >Q2:” The regularization coefficient alpha has not been discussed” We have added experiments regarding the regularization coefficient $\alpha$ and provide some of the results here: | $\alpha$ | Ours | | :---: | :----------------------------------------: | | 1e-2 | 95.50 / 97.10 / 89.27 | | 1e-3 | 95.45 / 97.04 / 88.74 | | 1e-4 | 96.19 / 96.58 / 89.01 | | 1e-5 | 97.92 / 97.78 / 92.42 | | 1e-6 | 98.69 / 98.32 / 93.24 | | 1e-7 | 97.79 / 97.98 / 91.36 | | 1e-8 | 97.03 / 96.97 / 89.32 | >W1:” on page 3, in the section on constructing the complex-valued empirical process, it is not explained why the common factors and residuals are combined together”and” this is actually used in the later asymptotic theory section. Another issue is that the title of the paper mentions residual estimation, yet the theoretical section also incorporates common factors” The purpose of constructing the complex-valued empirical process by combining the common factors and the residuals is to unify the estimated quantities within the test statistics. Additionally, combining the common factors and the residuals helps prevent the residuals from summing to zero. In the process of estimating the residuals, the common factor $F$ is also involved, but $F$ differs across theoretical framework assumptions (as we mentioned in Section 3.1, maybe is not based on the factor structure). However, $F$ can be expressed in the form of residuals in all cases. Therefore, the estimation of the residuals is a central component of the complete theory, with $F$ serving as an intermediate variable. >W2:” the manuscript does not include experiments examining the effects of different sparsity levels on model performance” Thank you for raising this question. In our theoretical section, our asymptotic theory suggests that as long as the estimated values are smaller than the true values, the anomaly detection task will be effective. In the experimental section, we constrain the sparsity using the regularization coefficient. Since we are applying this in an unsupervised setting, the level of sparsity is learned by the model itself. We believe that the sparsification operation enhances the effectiveness of the anomaly detection task, but the degree of sparsity cannot provide a precise bound. This is because the sparsity degree may vary for different types of unknown anomalies. Nevertheless, this raises an insightful question, and we will continue to explore this in our future research. >W3:” Writing suggestion” Thank you for carefully pointing out our small mistake. We will make the correction in the subsequent version. Additionally, we will provide the pseudocode of our method to make it more intuitive and clearer for the readers. >Q3:” Whether the module constructed by this method potentially be applicable to a broader range of anomaly detection problems” We have validated our method on multi-task tasks, and the results are as follows. Theoretically, our method remains effective in more anomaly detection methods. Here, we attempt to apply our method to the multi-class task on MvTec-AD dataset: ||Ours|Ours-Base| |:---:|:---:|:---:| |Average|93.3/94.6/85.9|58/70.8/39.3| Compared to the baseline method, our approach still shows a significant improvement in the multi-class task. We will follow up on this and conduct corresponding tests in the subsequent versions. We sincerely hope that our clarifications above have increased your confidence in our work. We will be happy to clarify further if needed. We thank you again for sharing your valuable feedback on our work.
Summary: This paper proposes a reconstruction-based method for anomaly detection, by using the construction of a mask in the Fourier domain to sparsify the information by reducing the number of estimated common factors of the input images. Then a U-Net is used to reconstruct the images, using the reconstruction error as the anomaly score. Experiments are conducted on MVTec AD and VisA datasets. Claims And Evidence: Not always. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: [a] OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations. CVPR 2019. [b] Attribute Restoration Framework for Anomaly Detection. TMM 2020. [c] A Unified Model for Multi-class Anomaly Detection. NeruIPS 2022. Other Strengths And Weaknesses: The proposed method is simple yet effective to improve the reconstruction-based anomaly detection. However, 1. Lacking important reviews in the literature. At the very beginning, OCGAN [a] proposed to add Gaussian noise to the input, while ARNet [b] used some transformations to erase some important attributes in the images, which have a very similar idea to this paper. Then, some MAE-based methods used masks to remove part of the inputs, transferring the reconstruction task to image inpainting. Finally, UniAD [c] illustrated that traditional AE failed in AD because of the identity shortcut of the model, by simply using the transformer-based architecture, the model's generalizability can significantly be improved, and can even be used for all tasks with only one model. However, this paper still focuses on one-model-one-task. 2. The theoretical part and the method design part are very disconnected. 3. The paper is very incremental. In terms of theory, the author's contribution is only to simply extend the existing one-dimensional theory to two-dimensional space. 4. Also, in terms of experimental design, it seems that only DFS Module was added on the U-Net. What if the DFS Module is added on the other architectures, such as UniAD? [a] OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations. CVPR 2019. [b] Attribute Restoration Framework for Anomaly Detection. TMM 2020. [c] A Unified Model for Multi-class Anomaly Detection. NeruIPS 2022. Other Comments Or Suggestions: See strengths and weaknesses. Questions For Authors: See strengths and weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer a5G4: We appreciate your review of our paper and the insightful comments you provided. We are glad to hear that you find the method we proposed to be both simple and effective. We have addressed the issues you raised as follows: >W1:” Lacking important reviews in the literature” As mentioned in your comments, we will include a related literature review in the subsequent version. We greatly appreciate your thorough feedback. Unlike the papers mentioned, which operate at the feature level, our method performs a demeaned Fourier transform in the frequency domain and provides a complete theoretical framework and proof, which is not mentioned in previous literature. (We were motivated by the application cases and abstract it into a theoretical model, based on this, we further give the Asymptotic theory of our model. ) Regarding the comment "However, this paper still focuses on one-model-one-task," since our contribution is mainly focus on the theory filed, our initial approach was to validate it on benchmark problems. However, your comment has been very insightful. Based on your suggestion, we have attempted to apply a multi-task, one-model approach on MvTec-AD dataset: ||Ours|Ours-Base| |:---:|:---:|:---:| |Average|93.3/94.6/85.9|58.0/70.8/39.3| Compared to the baseline method, our approach still shows a significant improvement in the multi-class task. We will refine the complete theoretical framework and methodology in future work. Thank you for your constructive suggestions. >W2:” The theoretical part and the method design part are very disconnected” We thank the reviewer for raising this question, as it will help others better understand our work. (Maybe some of our expressions in writing let you not clearly see our writing logic, but in fact, the theoretical derivation of the previous sections and the application of the later sections are closely related, you could see when apply the DFT method, our theory clearly explains why the applied method works). As we mentioned in Section 4, the decentralization operation corresponds to the construction of the complex-valued empirical process in our theoretical section. It simultaneously constructs the test statistic, which helps us derive the asymptotic lower bound. Based on our asymptotic theory, in order to establish the quantitative relationship ($K<R$) between the true common factors and the estimated values, we designed the sparsification operation. This operation weakens the main information (non-anomalous parts) in the original image, causing the residuals (anomalous parts) to occupy a larger proportion, making anomalies easier to detect. The experimental results based on the above operations validate our asymptotic theories. >W3:” The paper is very incremental” Thank you for raising this question, In terms of theory, our contribution is not simply an extension of existing one-dimensional theories to two-dimensional space. We also provide a complete asymptotic theory along with the corresponding proofs. This method and its applications really make sense: first, it requires the development of new hypothesis tests, complex-valued empirical processes, test statistics, and upper and lower bounds for the asymptotic theory. Second, unlike the one-dimensional time series theory, the theory at the two-dimensional image level requires spatial relationships to serve as the central element of the theory. Finally, the proof section involves new derivations and experimental validation of the constructed asymptotic theory, with validation metrics and methods that differ from those in the one-dimensional time series scenario. We are committed to constructing a complete theoretical framework, not merely extending the methods and theories. >W4:”it seems that only DFS Module was added on the U-Net. What if the DFS Module is added on the other architectures, such as UniAD?” We thank the reviewer for raising this question. Since our main contribution is the theoretical part, it is essential to validate the effectiveness of the theory and its corresponding asymptotic properties using a simple model, this is very common in the Monte Carlo simulation section of such theoretical articles, and our main aim is similar. This allows us to eliminate the additional effects introduced by complex models. Therefore, we opted for the relatively simple U-Net architecture. As pointed out in your comment, adding the DFS module to the UniAD architecture should indeed be effective. However, the UniAD framework operates at the feature token level, which conflicts with our frequency domain operations. This results in our module not being directly applicable to the UniAD framework. We appreciate your valuable suggestion, which will be of great help for our future work. We sincerely hope that our clarifications above have increased your confidence in our work. We will be happy to clarify further if needed. We thank you again for sharing your valuable feedback on our work.
Summary: The paper "Demeaned Sparse: Efficient Anomaly Detection by Residual Estimate" proposes a novel approach for unsupervised anomaly detection in structural images using a factor model framework combined with Discrete Fourier Transform (DFT). The authors introduce a test to detect anomalies by analyzing weighted residuals in the Fourier domain, comparing them to a zero spectrum under a null hypothesis of no anomaly. They develop the Demeaned Fourier Sparse (DFS) module, which constructs masks in the Fourier domain to enhance reconstruction-based anomaly detection. The method leverages residuals to identify anomalies without requiring prior knowledge of anomaly types, offering both theoretical rigor (via asymptotic properties) and practical applicability. Experimental results on datasets like MvTec-AD demonstrate competitive performance in anomaly detection and localization, with the approach being computationally efficient compared to some existing methods. ## Update after rebuttal After reviewing the authors’ rebuttal, I appreciate the effort they’ve put into addressing the feedback. Their responses have largely alleviated my concerns, and despite some differing perspectives, I will maintain my original score. Claims And Evidence: - **Strengths:** - The paper’s theoretical backbone is a standout feature. Section 3 delivers a rigorous statistical framework through asymptotic properties, such as Theorems 3.2, 3.4, and 3.6, which mathematically define how residuals behave under normal and anomalous conditions. The derived detection rate of $H^{−1/2}W^{−1/2}$ offers a concrete lower bound for spotting subtle anomalies, marking a meaningful contribution to the field. - Experimental evidence supports its claims effectively in several cases. The results show the method can outperform established approaches like PatchCore and DRAEM in specific scenarios, suggesting practical potential. - **Weaknesses:** - The connection between theory and practice isn’t fully realized. The asymptotic properties assume large image dimensions (H, W → ∞), but typical dataset images (e.g., 256x256 in MvTec-AD) don’t meet this scale. The paper doesn’t bridge this divide with empirical validation, such as testing on varying image sizes. To strengthen its case, it could have emphasized how its theoretical edge translates into tangible benefits, making the evidence more compelling against competitors. Methods And Evaluation Criteria: - **Strengths**: - The methodology shines with its originality, blending a factor model with DFT and introducing the DFS module. This module projects residuals into the Fourier domain and iteratively optimizes masks (e.g., via Bernoulli sampling and sigmoid mapping) to isolate anomalies efficiently. A key advantage is its self-contained residual generation—detecting anomalies using the model itself without external data structures like PatchCore’s memory bank or DRAEM’s dual-network setup. This lean approach enhances its appeal for unsupervised settings. - Efficiency is a clear strength, as evidenced in Table 3. The proposed method significantly undercuts heavier models like SSNF (294.67M parameters, 102.23G FLOPs) and PyramidFlow (162.20M parameters, 81.13G FLOPs), positioning it as a viable option for resource-constrained environments, such as industrial applications with limited compute power. - **Weaknesses**: - Robustness isn’t fully substantiated. The absence of statistical significance tests (e.g., p-values or confidence intervals) for the reported metrics means readers can’t confidently assess whether the performance edge is meaningful or due to chance. Theoretical Claims: The asymptotic theory is a robust intellectual achievement. Section 3’s derivations (e.g., Proposition 3.1, Theorem 3.6) establish how residuals behave under null and alternative hypotheses, providing a statistical lens for anomaly detection. This clarity sets a high bar for theoretical rigor in the domain. Experimental Designs Or Analyses: - Strengths: - The experimental design is well-structured, testing the method on established datasets (MvTec-AD and VisA) against strong baselines like PatchCore and DRAEM. This ensures a fair comparison within the anomaly detection community’s standard benchmarks. - Hyperparameter exploration in Appendix E adds depth. By varying sampling functions and epochs, it demonstrates the method’s robustness and adaptability, giving readers insight into its operational flexibility. - Qualitative results (Figures 6, 7) are visually persuasive, effectively showcasing the method’s ability to pinpoint anomaly locations in images, which aligns with its localization claims. - Weaknesses: - As mentioned in the **Claims and Evidence** section, testing on variable image sizes, if possible, would benefit the paper by demonstrating that the asymptotic property holds in practice. Supplementary Material: - The supplementary material enriches the paper. - Theoretical proofs in Appendix B (though partially accessible) strengthen the core claims, giving readers a deeper dive into the mathematical underpinnings. - Analysis of computational cost in Appendix D is another important part that highlights the advantages of this methodology. - Appendix E’s hyperparameter study provides a detailed look at how choices like sampling functions affect performance, offering practical guidance for tuning the method. Relation To Broader Scientific Literature: - The paper builds thoughtfully on prior work, extending factor models (Fu et al., 2023) and reconstruction-based detection (Zavrtanik et al., 2021) into a new image-focused framework, blending time-series inspiration with visual analysis. - Its Fourier-based approach is a fresh take, diverging from the CNN-heavy norm and offering a frequency-domain perspective that could inspire further exploration. Essential References Not Discussed: Appendix A effectively addresses relevant references, providing a comprehensive overview of related work that ties the method to the current research landscape. Other Strengths And Weaknesses: - Strengths: - The paper is clear and well-written. - The method offers a theoretically robust framework, with rigorous asymptotic properties and a novel Fourier-based approach enhancing anomaly detection. - Weaknesses: - There are some minor issues, which I mentioned in the previous sections. Other Comments Or Suggestions: The paper struggles to clearly showcase its comparative advantages—such as greater efficiency, competitive performance, and a self-contained residual construction that detects anomalies without the need for additional data—over other methods, and it would benefit from more prominently integrating these strengths into the abstract or introduction to convince readers of its practical value. Questions For Authors: N/A. Please check other sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer LTm8, Firstly, we express our gratitude for your thorough review and insightful comments on our paper. Your recognition of the rigor and contribution of our work is greatly appreciated. We also greatly appreciate your rigorous and professional feedback. Your suggestions are highly meaningful, especially for a theory-driven paper like ours. We have taken your feedback as an opportunity to further refine our manuscript. We address your concerns below: >W1:” more test on variable image sizes” We agree with the viewpoint that the asymptotic global theory should be validated on more image sizes, as you mentioned in your comment: "it could have emphasized how its theoretical edge translates into tangible benefits." To address this, we have applied our method to larger-scale scenarios, with experimental results on MvTec-AD dataset presented for 512×512 and 1024×1024 image sizes. Due to space and time limitations, we only provide the average results of metrics. |Resolution|Ours|Ours-Base| |:---:|:---:|:---:| |256 $\times$ 256|98.69/98.32/93.24|95.04/96.59/84.10| |512 $\times$ 512|98.52/96.13/91.69|85.11/89.97/69.24| |1024 $\times$ 1024|97.19/91.49/86.82|81.65/82.83/58.79| results show that our method continues to maintain a high degree of effectiveness and shows greater stability compared to the baseline method. If possible, we will include relevant comparative results in future versions. >W2:” Robustness isn’t fully substantiated.” We agree that statistical significance tests for the reported metrics are necessary, as the main contribution of this paper is statistical theory. Since we apply our method to practical detection problems, more comprehensive metrics are needed to assess overall performance. Specifically, in the validation of the asymptotic global theory (for the entire image), the p-value contribution of a single pixel might be overlooked (p-values are less adaptive to imbalanced data, as they often fail to account for the sparsity of anomalies). Therefore, we prefer to use metrics commonly employed in the anomaly detection field as our evaluation criteria. We provide partial p-values for our method on MvTec-AD dataset. | |Ours| Ours-Base| |:---:|:---:|:---:| |P-value| 0.0276 | 0.0792 | As can be seen, after incorporating our method, The p-value is below the significance level of 0.05, indicating that we can reject the null hypothesis (nonanomalous) with a high probability. If possible, we will include corresponding statistical significance tests for both our method and the comparative methods in future versions. >W3:” As mentioned in the Claims and Evidence section” Please see our response to W1. >W4:”There are some minor issues, which I mentioned in the previous sections” Please see our response to W1 and W2. We sincerely hope that our clarifications above have increased your confidence in our work. We will be happy to clarify further if needed. We thank you again for sharing your valuable feedback on our work. --- Rebuttal Comment 1.1: Comment: After reviewing the authors’ rebuttal, I appreciate the effort they’ve put into addressing the feedback. Their responses have largely alleviated my concerns, and despite some differing perspectives, I will maintain my original score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer LTm8, We thank you for your thorough review of our paper and our response for providing constructive feedback that has significantly contributed to its improvement. Your insights have been invaluable in helping us refine our work. Best regards, Paper9325 Authors
null
null
null
null
null
null
null
null
Benchmarking Quantum Reinforcement Learning
Accept (poster)
Summary: This paper addresses the issue of valid performance comparisons between classic RL and QRL. The authors propose a benchmarking methodology, which is based on a statistical estimator for sample complexity and a definition of statistical outperformance. In addition, a novel RL benchmark is established on which they compare the performance of DDQN and PPO using classical neural networks and VQCs. The results of the experiments show that the quantum variants consistently outperforms the classical version when the number of trainable parameters is similar. Claims And Evidence: The claims seem to be supported for the setup of the experiments. Methods And Evaluation Criteria: I don’t understand why the only benchmark in this paper is a completely new benchmark. Why not using at least one or two well established RL benchmarks like inverted pendulum or cart-pole? I’m not convinced that the new BeamManagement6G benchmark is particularly useful RL benchmark. 1. Is it even an RL problem? As I understand, the agent can select any of the antennas at any time, and there is no planning ahead of the very next step needed. Such problems are called contextual bandits and can be seen as RL tasks with gamma=0. The agent’s only task is to select the best next antenna given the current context (state). Can you explain why discount values of 0.9, 0.95, or 0.99 are needed? 2. The true Markov state is only partially observable. The agent is limited to use only the current values of Antenna, Codebook, and Intensity. But why? If the agent would be allowed to look further into the past it could estimate the direction and velocity of the mobile phone way better. Theoretical Claims: No. Experimental Designs Or Analyses: The hyperparameters used in the experimental design for the RL algorithms are good choices. As mentioned above, I’m not convinced that this benchmark is a good choice for their experiments. Supplementary Material: No. Relation To Broader Scientific Literature: I think the premise of this paper is very interesting and important: find a good common basis to compare RL and QRL methods to yield solid results from which statements about advantages and disadvantages can be derived. Essential References Not Discussed: There are two recent publications in the area of QRL that could be added to the literature review: QRL algorithm with provable advantage: Wiedemann, Simon, et al. "Quantum Policy Iteration via Amplitude Estimation and Grover Search–Towards Quantum Advantage for Reinforcement Learning." Transactions on Machine Learning Research. 2023. Model-based Offline QRL: Eisenmann, Simon, et al. "Model-based Offline Quantum Reinforcement Learning." 2024 IEEE International Conference on Quantum Computing and Engineering (QCE). Vol. 1. IEEE, 2024. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Sometimes “Fig. X” is used and sometimes “Figure X” Figure 1 is placed on the front page but mentioned for the first time on page 6. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the Reviewer for the assessment of our paper. We appreciate the constructive comments of the reviewer and are confident that addressing them has further strengthened the quality of our work. - Regarding **Methods And Evaluation Criteria**: 1. An argument for why the BeamManagement6G environment is particularly meaningful for QRL models is outlined in the rebuttal for Reviewer **jcrL**. To show the validity of our methodology for more diverse environments, we repeat the valuation for the CartPole environment in the revised version of the manuscript, as discussed in the rebuttal for Reviewer **Vo3S** 2. On the formulation of the beam management task as MDP / RL environment please see the argument at the bottom. 3. As the Reviewer correctly points out, including several previous observations in the policy input can provide some (implicit) indication on the UEs direction and speed of movement, thus making action selection easier. We emphasize that treating the information from several previous time steps as observation is exactly what we do. This allows the agent to infer the current direction of its motion. We will clarify this detail in the updated version of the manuscript. - Regarding **Essential References Not Discussed**: Thanks for drawing our attention to these references, we already incorporated them into Section 2 for a updated version of our paper. One comment: While from our perception the first one is targeted for fault-tolerant quantum computing, i.e. a somewhat different setup than we consider, the guarantee of advantages related to sample complexity is quite interesting. - Regarding **Other Comments Or Suggestions**: We adapted the use of referencing to be consistent. We acknowledge that the first referencing of Figure 1 being delayed to page 6 is not optimal and will adapt that. However, we would opt for keeping it on the front page, as we think the plot conveys the core concept regarding our sample complexity estimator. In general, we want to thank the Reviewer again for the detailed suggestions, and are especially happy of the assessment ``I think the premise of this paper is very interesting and important: find a good common basis to compare RL and QRL methods to yield solid results from which statements about advantages and disadvantages can be derived.`` We think that this perfectly summarizes the intention behind our work and hope this or a similar rigorous benchmarking procedure will be established in the realm of quantum reinforcement learning. Finally, we hope that the changes and extensions we discuss above improve the quality of our work from your point of view. --- ## On the Formulation of the Beam Management Task as an MDP/RL Environment: We acknowledge that we did not fully clarify the subtleties of the BeamManagement6G environment and why it constitutes a valid RL benchmark. Our environment models handover management problem (selecting the most suitable base station) not the beam selection problem per base station. As described in Appendix A, the trained policy selects the optimal base station, while a low-level algorithm (outside the policy) selects the best beam from the available codebook at that base station. At each time step, the agent observes the selected base station, the beam ID, and the beam intensity received. The choice of the base station (the chosen action) results in different next states (different base stations lead to different next beam ID and received intensity). Therefore, this setting is not a contextual bandit problem since the state reached after taking an action is dependent on the action taken and by that determines the future state-action sequence. Whether to RL or not to RL (https://arxiv.org/abs/2405.19045, https://arxiv.org/abs/2401.14823) has been discussed extensively for the problem of handover management: Although the policy lacks an explicit planning component (without knowledge of the UE's future movements or the low-level beam selection algorithm at each base station), it must consider the future effects of its actions (i.e., the discount factor $\gamma$ cannot be zero) for several reasons: - Ping-Pong Effects: Incorrect base station selection can lead to lower beam intensities and may require reconnection to the previous base station. Such ping-pong effects hinder network and UE efficiency. - Timing of Handovers: The policy must implicitly determine the optimal timing to connect to a new base station and disconnect from the current one to maintain high beam intensity values, ensuring consistent quality of service during handovers. - Intermediate Base Stations: Although not penalized in this work, connecting to an intermediate base station between two others may provide minor benefits for a few time steps. We also think that the exact definition of the task should not be too important for the main contributions of our paper: We establish the robust benchmarking procedure in an environment-independent way. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I believe that without a penalty for actions in the benchmark, the optimal policy should select the antenna that maximizes the immediate reward. It seems that the paper might primarily serve as a means to promote this new benchmark. Additionally, I am unable to review the promised experiments and results related to the cart-pole task. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their further comments and would like to highlight two additional points: - We want to stress that the main emphasis of the paper is not the environment we use, but the benchmarking methodology including the statistical estimator for sample complexity. The BeamManagement6G environment serves a replaceable example environment (with particular merits for benchmarking QRL as discussed in the Rebuttal for Reviewer **jcrL**). - The results of the CartPole experiments have been uploaded to https://zenodo.org/records/15097065?token=eyJhbGciOiJIUzUxMiJ9.eyJpZCI6ImU0MzE3YTcwLTY3OTMtNDRmOC1iYjJiLWQ0N2EwNTExNTBjNyIsImRhdGEiOnt9LCJyYW5kb20iOiIwNjliOTVjZWM4MTM5NTA1ZTQ4NzhkMjJmZGVlMzU5YSJ9.iIKBA2-L10uYnzHsE4ibgLj3A4OaD73cVatmN-Unu_r78aOP3w8ZEArx1qvjW9C502Wlvo-10E5dYNkfMG52sQ. Further details on the experiments are also outlined in the Rebuttal for Reviewer **Vo3S**.
Summary: The paper introduces a standard benchmarking methodology for quantum reinforcement learning (QRL) algorithms, executed with high statistical rigor. The proposed methodology emphasizes sample complexity as a key metric and introduces a statistical estimator for empirical sample complexity. Additionally, the authors define a notion of statistical outperformance to facilitate fair comparisons between algorithms. The benchmarking framework is applied to the BeamManagement6G environment, and the results are well-visualized, providing clear insights into the comparative performance of different QRL methods. The authors also provide code, ensuring reproducibility. ### Update after rebuttal: The authors have addressed my main concerns effectively. Their clarification on the estimator’s properties, expansion on related RL benchmarking literature, and initial results on an additional environment (Cartpole) strengthen the paper. The open-source implementation and careful framing of quantum advantage claims further support their contributions. Claims And Evidence: The authors claim that their benchmarking methodology enhances reproducibility and standardization in the fragmented field of QRL. This claim is well-supported by the detailed description of their statistical approach, including the empirical sample complexity estimator and statistical outperformance metric. The inclusion of publicly available code further strengthens the credibility of their claims by allowing independent verification. Methods And Evaluation Criteria: The paper employs sound statistical analysis and machine learning techniques to develop a robust benchmarking methodology. The choice of sample complexity as a core metric aligns well with existing evaluation methods in reinforcement learning. The proposed statistical estimator and outperformance metric offer a meaningful and transparent way to compare QRL algorithms. The evaluation is well-documented, and the methodology is appropriate for the problem domain. Theoretical Claims: The paper does not focus heavily on theoretical proofs but rather on empirical validation. The statistical framework introduced for sample complexity estimation appears well-founded. However, a more formal proof or derivation of the estimator’s properties could strengthen the theoretical contributions. Experimental Designs Or Analyses: The experimental setup is well-structured, with clear definitions of benchmarking criteria and performance metrics. The use of BeamManagement6G as a test environment is justified, and the results are well-visualized and easy to interpret. However, further validation on additional QRL environments would enhance the generalizability of the proposed methodology. Supplementary Material: No. Relation To Broader Scientific Literature: The paper builds upon prior work on sample complexity in QRL but offers a novel benchmarking approach with enhanced statistical rigor. While it acknowledges previous efforts in QRL evaluation, a more detailed discussion on how this methodology compares with other benchmarking efforts in classical reinforcement learning would be valuable. Essential References Not Discussed: The paper appears to cite relevant prior work on sample complexity and benchmarking in QRL. Other Strengths And Weaknesses: Strengths: - The introduction of a standardized benchmarking methodology is a significant contribution to QRL research. - The statistical rigor in defining sample complexity and outperformance metrics is commendable. - The clear presentation and well-visualized results make the findings accessible and interpretable. - The inclusion of code ensures reproducibility. Weaknesses: - The paper primarily focuses on BeamManagement6G; validation on additional environments would enhance its impact. - While the empirical approach is strong, further theoretical justification of the proposed statistical estimator would be beneficial. - A broader discussion comparing QRL benchmarking with classical reinforcement learning benchmarks could improve contextualization. Other Comments Or Suggestions: Clarify whether the benchmarking methodology can be easily adapted to other QRL environments beyond BeamManagement6G. Questions For Authors: How does the proposed benchmarking methodology compare to classical reinforcement learning benchmarking approaches in terms of adaptability and standardization? Do you plan to validate this methodology on additional QRL environments, and if so, what are the main challenges in doing so? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the Reviewer for the positive assessment of our paper. We are sure that we can address the comments sufficiently in a camera-ready version of the paper: - Regarding **Theoretical Claims**: You advocated for a more formal proof of the estimator's properties. We have derived key properties like consistency and bias in appendix B, and also empirically analyzed the approximation quality wrt the central limit theorem. - Regarding **Relation To Broader Scientific Literature**: We are working on extending our discussion on related benchmarking efforts of classical RL in Section 2 of our paper. Among others, we now cover works like e.g. https://proceedings.mlr.press/v119/jordan20a.html. Furthermore, we want to address the stated questions: - Currently, there is unfortunately no standardization in QRL. Our work may establish a standard for measuring sample complexity, but more work is needed for other metrics. - Indeed the adaption to other QRL/RL environments is straightforward, as our implementation (available and open-source) makes use of the gymnasium environment structure; as discussed below, we are currently in the process of generating results for the readily available Cartpole environment, for details see below. In general, we want to thank the Reviewer again for the detailed suggestions. We hope that the changes and extensions we discuss above improve the quality of our work from your point of view. --- ## On the extension to other environments: In our initial research we performed experiments on the standard Cartpole environment with vanilla policy gradient. We observed some superiority of quantum models over classical ones, consistent with claims in the QRL community. However, we emphasize that we do not claim this as quantum advantage, as elaborated below: - We appreciate the reviewers' suggestion that including results on more well-known environments would strengthen our paper's clarity and generalization. We have uploaded two figures addressing the Cartpole environment to https://zenodo.org/records/15097065?token=eyJhbGciOiJIUzUxMiJ9.eyJpZCI6ImU0MzE3YTcwLTY3OTMtNDRmOC1iYjJiLWQ0N2EwNTExNTBjNyIsImRhdGEiOnt9LCJyYW5kb20iOiIwNjliOTVjZWM4MTM5NTA1ZTQ4NzhkMjJmZGVlMzU5YSJ9.iIKBA2-L10uYnzHsE4ibgLj3A4OaD73cVatmN-Unu_r78aOP3w8ZEArx1qvjW9C502Wlvo-10E5dYNkfMG52sQ (anonymized) and will incorporate them into the camera-ready version. To explain: - **Figure 1** compares classical and quantum models for the CartPole-v1 environment. Interestingly, the quantum model achieves lower sample complexity across all epsilon-delta configurations. We performed extensive hyperparameter tuning and tested classical models ranging from 30 to ~17K parameters, reporting the best one we identified. While this indicates some superiority, we explain below why this shouldn't be considered quantum advantage. - **Figure 2** provides a cross-section of Figure 1 at a reward of 475—the threshold typically considered as solving the Cartpolev1 environment. It illustrates that the best model depends on the desired success probability and that results may not always be significant. - We want to highlight that, although these plots suggest some superiority of quantum methods, it's crucial to phrase conclusions carefully. We attribute the observed behavior to the inductive bias of quantum models for this problem. In contrast, many current QRL works claim quantum advantage without sufficient statistics and with around 10 runs (the above plots average over 100). Our paper advocates for more rigorous testing of such claims and provides tools to facilitate this. - In conclusion, we believe the Cartpole environment (and most standard RL benchmarks) is insufficient for rigorous testing of QRL. The performance depicted above was achieved with a quantum circuits on only 4 qubits, which are easy to simulate. Moreover, the environment isn't adaptable for meaningful scaling analysis, unlike the BeamManagement6G environment we studied. As summarized in Sections 7 and 8, we cannot definitively answer whether QRL provides actual quantum advantage—this requires further analysis on larger systems. However, we are confident that our work offers valuable tools for such benchmarks, once more efficient quantum circuit simulation methods are available or when execution on quantum hardware becomes possible.
Summary: This paper analyzes quantum reinforcement learning to provide a more nuanced evaluation on the potential for quantum advantage. They introduce a sample complexity metric that aims to tackle the evaluation issues, and perform a number of empirical simulations on a new RL environment to evaluate the potential for quantum RL in different regimes. Claims And Evidence: The core claims of this paper (regarding the nuance required when analyzing quantum RL) are generally clear and convincing. Potential improvements will be outlined below. Methods And Evaluation Criteria: I’m not sure I’m sold on the new environment. I agree it is valuable to have an environment that is small that you can also tune the complexity and scale of readily (although I would say many of the minigrid/gridworld environments fit that niche). The industrial relevance of the environment doesn’t matter to this paper as well. The real issue is that the environment decreases reader comprehension of the main points of the paper. There is no intuition on how hard or meaningful results are in this environment, so it can be hard to really build an understanding of the impact of the comparisons. If the authors are attached to this environment, then at the very least, they should include other environments that the paper they cite use (e.g. the usual CartPole, BlackJack, minigrid, mountain car etc.). This would both help reader understanding by providing the analysis in context of something they already know, and also help verify the generality of the results (if things are only checked on one environment, maybe it’s just a phenomenon of that environment and doesn’t apply to others?). Theoretical Claims: The focus was on empirical work, any theoretical claims are readily checked. Experimental Designs Or Analyses: In general, the experiments are sufficient. Supplementary Material: Yes, everything except the environment details. Relation To Broader Scientific Literature: This paper builds upon the broad literature of QRL which has recently been focused on hybrid approximations using traditional RL algorithms. It suggests a number of improvements in the analysis of this work, but could be more forceful in its suggestions and analysis (as much of the existing literature is a number of problems well beyond what many RL papers have). Essential References Not Discussed: The analysis of RL algorithms is the subject of quite some interest, and it would benefit the authors to contextualize their proposed evaluation more against this backdrop. Additionally, some literature like https://arxiv.org/abs/2006.16958 is missing (especially given this CDF is quite reminiscent of their sample efficiency. Other Strengths And Weaknesses: This paper does a lot of things right, and I think it serves an important role in the QRL literature. However, I think there are some things that are holding it back, but could be readily improved upon. Some of which has already been discussed. I don’t think figure 8 adds much. It just says “classical and quantum both solved it and random doesn’t”, but that was already conveyed in the text (and is generally true of RL environments). I think there is a place here for a connection to a more traditional RL figure. I know there are issues, as pointed out by this paper, on the “iteration vs reward” plots, but it could be good to connect what people know from RL literature to this new framework (for example, if I see sample complexity of [X], how would that reflect itself in reward graphs). This figure could also just be cut entirely. This is not necessarily unique to this paper, but the focus on quantum parameter count could use more justification. What I mean is that, classical parameter count only matters because it is a semi-useful proxy for speed (or really energy, since there might be cases where we can use more processors in parallel and gain more speed at the cost of energy) which is why there’s been a big shift away from parameter counts towards things like FLOPS (also with the advent of modern ML methods, the correlation between param count and speed was beginning to loosen). Now perhaps the same is somewhat true for quantum, more layers does take more time, and more qubits do take more energy to control, but I’m not sure the comparison between parameter counts means much. This is not an argument with model complexity, I agree the plots regarding parameters vs. model complexity are fair. But the general comparison of “small quantum” vs. “small/large classical”. The number of parameters don’t really matter, it’s only the advantage that matters (either via time, or energy, or sample complexity), so the idea of comparing 400 quantum vs 400 classical parameters doesn’t really seem meaningful (because those parameters are doing different things and we are operating in very different regimes of computation). Comparing quantum vs. classical in general is important, but the focus of papers on saying “small Q is better than small C” as if that’s meaningful (like we don’t care about the number of parameters, just about how it performs on those three axes) should be more addressed, specifically focusing the parameter discussion around model complexity and comparisons around sample/time. Other Comments Or Suggestions: From a purely constructive criticism perspective (this has no impact on my evaluation of the paper), I’m not sold on Figure 1. I get what the authors are going for, but it takes up a lot of space and doesn’t actually say much beyond (small C < small Q < big C). I don’t really have an idea for a better figure, but I believe there is one that could have higher information density. I also feel the same about figure 6. It seems like this should just be a table, the visual information of this is conveying only a small collection of discrete points (and the colors/labels don’t offer anything beyond what text would in a table). Questions For Authors: 1. Presumably in figure 7 quantum will saturate too? Also why not do depth rather than width? I see that analysis in Figure 18, but it looks flat, which seems to indicate to my point that quantum parameters aren’t interpretable in the same way. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the assessment of our paper and are pleased to read that the reviewer thinks that `This paper does a lot of things right, and I think it serves an important role in the QRL literature.` We appreciate the constructive comments and are confident that addressing them has further strengthened our paper. - Regarding **Methods And Evaluation Criteria**: We agree with the reviewer that it can be difficult for the reader to get intuition on e.g. the hardness of the BeamManagement6G environment. Nevertheless, we believe that the environment is particularly meaningful for benchmarking RL vs. QRL as we argue in the rebuttal for Reviewer **jcrL**. Following the reviewer's suggestion we included the application of our evaluation methodology to the CartPole environment, see the rebuttal for Reviewer **Vo3S**. - Regarding **Relation To Broader Scientific Literature**: In the revised version of the manuscript we added a paragraph were we clearly state suggestions on how to achieve robust and reproducible benchmarking results based on the methodology we introduced and stress its importance for standardization and to avoide false claims. - Regarding **Essential References Not Discussed**: We have extended our discussion in Section 2 with the suggested paper and are incorporating additional references on benchmarking classical RL (e.g. arXiv:2402.03046, arXiv:2209.12016) - Regarding **Other Weaknesses**: 1. We agree, that Figure 8 does not add much value. We think it makes sense to move it to the appendix--depending on space--and instead incorporate the above mentioned results on Cartpole into the main part. 2. We agree with the reviewer that focusing on parameter count isn't always adequate; e.g. the difference between models with 387 quantum and 4,611 classical parameters is less relevant due to simulation overhead (since training the quantum model takes more time than the classical model). Indeed the parameter count by itself is of limited value, only metrics like e.g. energy, time and sample complexity matter. The key point we want to convey is that parameter count can be used to define a sequence of models in model space (assuming additional hyperparamter search for given parameter count). This sequence can then be used to identify potential trends for example on the sample complexity axis as we scale to more powerful quantum models. We made this point more precise in the modified version of the manuscript. Comparing parameter counts can still have merit also for ther axis like time and energy, because: - Using parameter count as a proxy (e.g., in Figure 7), we speculate that scaling quantum models may provide a net gain in sample complexity. Validating this requires larger-scale empirical analysis beyond our computational resources. This limitation explains why most QRL work focuses on fewer than 10 qubits, and our paper questions advantage claims in this regime. - For complex problems, large classical models become expensive and time-consuming to train. If similar performance would be achievable with significantly fewer quantum parameters, executing quantum circuits could potentially be more efficient in terms of time and cost as quantum hardware matures. - Regarding **Other Comments Or Suggestions**: We agree that the information content of Figure 1 and 6 is limited, especially as both describe the same setup. With the new empirical results on the Cartpole environment, we can replace one of the figures with a new one providing more intuition. - Regarding the **Questions**: 1. We acknowledge this valid concern. Saturation is expected but might occur at lower sample complexity for quantum models than for classical models. Though speculative, the performance gap to the best classical model at higher qubit counts is small. The 14-qubit model pushed our simulation capabilities to the limit due to the 100 runs needed per model for robust analysis; simulating the circuit is feasible, but gradient computation and training are very expensive. Nearly a quarter of the 40,000 compute hours were devoted to these systems. However, our work provides benchmarking techniques, once more efficient simulations or suitable quantum hardware is available. 2. We focus on scaling qubit number since performance saturates quickly with increased depth, as you pointed out. While quantum parameters certainly require different interpretation than classical ones, we do not think that this can be stated based on Figure 18. Indeed, also with scaling of classical depth (hidden layers) the performance saturates quickly, i.e. we see the same behaviour as for the quantum models. We thank the Reviewer again for the detailed suggestions. We think that currently there are a lot of issues in the QRL literature, of which we want to address at least a small fraction. We hope the discussed revisions and extensions improve the relevance and quality of our work from your perspective. --- Rebuttal Comment 1.1: Comment: I appreciate the authors response. I think they have addressed many of my issues, and the changes they will make will improve the paper. The results from Vo3S response (on CartPole) are very influential for these improvements. I understand the motivation for the new environment, although I still remain hesitant on its necessity (although new environments often are the subject of their own papers, e.g. https://arxiv.org/abs/2402.16801 I suppose this environment is too small). In general, it seems like a confounding variable in otherwise important results. But I understand that a small environment with flexible scaling might be a gap that is needed, so as long as the environment is well maintained and integrated and easy to use in existing QRL gym workflows, I suppose it can serve a role. Looking at other reviewers, this seems like a common thread, so hopefully there can be some collective agreement and resolutions to be found. I have updated my score to reflect these improvements. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging our rebuttal and their suggestions in the original review. We agree that the added Cartpole results significantly strengthen our paper. Regarding the points on the environment: - Encoding into quantum states introduces specific requirements satisfied by few existing environments; for example, the environment in arXiv:2402.16801 would be challenging due to its image representation. - We have detailed the availability of our BeamManagement6G environment in the data availability statement of the camera-ready version. It strictly follows the Gymnasium API and should be straightforward to use in other work.
Summary: The paper presents a benchmarking methodology for quantum reinforcement learning (QRL) by introducing a statistical estimator for sample complexity and a robust definition of statistical outperformance. Through experiments in a novel benchmarking environment inspired by wireless communication tasks, the study finds that hybrid quantum-classical models can outperform similarly sized classical models but do not definitively surpass larger classical models. The results highlight the need for statistically rigorous evaluations in QRL and question some claims of quantum advantage. The study concludes that while QRL shows promise, its superiority remains uncertain and requires further large-scale empirical investigation. Claims And Evidence: The paper provides robust statistical evidence—through extensive experiments and resampling techniques—for its benchmarking claims, particularly that hybrid quantum–classical models can achieve lower sample complexity than similarly sized classical models. However, the broader claim of a definitive quantum advantage is less convincing. Specifically, the comparison may not fully account for optimal tuning of classical baselines, and the experiments are limited to small-scale, simulated environments. This makes extrapolating the results to real-world, large-scale problems somewhat speculative. Methods And Evaluation Criteria: The proposed methods and evaluation criteria—such as the statistical sample complexity estimator and the BeamManagement6G benchmark—seem well-suited for comparing classical and quantum reinforcement learning, however the reviewer is not familiar with the relevant literature. Theoretical Claims: Did not check. Experimental Designs Or Analyses: Experiments seem sound, though comparison to a broader range of RL updates should be considered. Supplementary Material: Did not check. Relation To Broader Scientific Literature: Reviewer is not familiar with quantum literature. Authors should perform more experiments with diverse RL updates to determine how they vary in the quantum setting. Additionally, broader discussion of baseline tuning is required. Essential References Not Discussed: Reviewer not sufficiently familiar with quantum literature to comment. Other Strengths And Weaknesses: The paper is commendable for its creative integration of quantum and classical reinforcement learning through a robust, statistically grounded evaluation framework, and its application to a real-world inspired benchmark enhances its relevance and clarity. However, the study's limited experimental scale and potential under-optimization of classical baselines may restrict the generalizability of its conclusions. Other Comments Or Suggestions: None for now. Questions For Authors: None for now. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the assessment of our paper. We appreciate the constructive comments made by the reviewer and believe that addressing them has further strengthened our work. - Regarding **Claims and Evidence**: 1. We agree on the concern, that the claim of *definitive quantum advantage* is not supported by the empirical results of our work. However, this is not something we want to claim, as we see our paper also as critique on the large collection of literature that makes such statements without rigorous statistical analysis. We also explicitly discuss this in Section 7. 2. Regarding the speculative extrapolation to larger, real-world environments: We acknowledge that the statements on scaling with larger quantum models in Section 6.2 are formulated too strongly. We will make clear that the extrapolation is speculative, and further empirical investigation is required to make it more definitive (for which we unfortunately currently lack the compute resources, as training models with over 14 qubits requires upwards of 1 day per run) 3. Regarding tuning of the optimal baseline: Of course we can not guarantee that the classical baseline is optimal wrt all hyperparameters. However, we conducted an extensive hyperparameter search summarized in Appendix C.1 (2160 hyperparameter configurations for DDQN, 1800 for PPO), which should give reasonable backing for our claims. - Regarding **Experimental Design Or Analysis**: Overall we consider three *standard* RL algorithms, i.e. VPG, DDQN, PPO; while of course there exist many more, our paper does not focus on benchmarking a broad range of algorithms, but rather provides the overall framework; extension to other algorithms and environments should be straightforward also for third-party researchers with the provided open-source code. An extension to the standard Cartpole environment is discussed in the rebuttal for Reviewer **Vo3S**. Overall, we emphasize that our paper should not be seen as yet another one claiming quantum advantage on some task with some model. We again thank the author for pointing out that currently there are formulations in the paper that suggest otherwise. Rather, our work provides the tools to establish robust statistical backing for such comparisons between classical and quantum models. --- ## On the use of the BeamManagement6G environment: Our motivation for considering only a single (new) benchmark environment is the focus of our work on establishing a benchmarking methodology rather than on solving specific problems. - We think that the BeamManagement6G environment has its justification for our work. First of all, environments like Cartpole or Mountain Car are certainly a good benchmark, but are so simple that they can be *solved* by rather small classical and quantum models (small meaning under 100 parameters), which makes scaling analysis difficult. Other more scalable and complex environments, like Frozen Lake and Atari have the problem that the state spaces are too large for meaningful encoding into a quantum circuit. Some thoughts on properties we in general need for a environment suitable for QRL: - The state and action spaces must be sufficiently small for an efficient encoding into a quantum state. - The dynamics must be sufficiently complex for justifying the use of QRL in the first place. - The state elements should be continuous to facilitate the continuous nature of quantum states, which is not the case for e.g. Gridworld environments. For the BeamManagement6G environment, the energy is conitnuous and the codebook element intrinsically corresponds to continuous beam angles. - Why the BeamManagement6G environment is a particularly suited real-world task with relevance of our sample complexity estimator: As we discuss in our paper, sample complexity as a metric is well defined only with respect to a given performance threshold. In the BeamManagement6G environment, different UEs may have different service quality requirements, which translates to a different error threshold epsilon. Different policies for different levels of QoS can be trained, which can be naturally translated to different performance threshold levels, as highlighted in Figure 4 in the paper.
null
null
null
null
null
null